Dataset schema (column statistics from the dataset viewer):

| Column | Type | Range / Stats |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-10 00:38:21 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 551 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-10 00:38:17 |
| card | string | length 11 – 1.01M |

Each row below lists: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | followed by the model card text.
skymizer/Qwen3-30B-A3B-Thinking-2507-GGUF | skymizer | 2025-09-09T16:07:10Z | 0 | 0 | null | ["gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-09T08:52:13Z |
---
license: apache-2.0
---
These models are converted from [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507).

Please set the generation config as follows:
- temperature = 0.6
- top_p = 0.95
- top_k = 20
- min_p = 0.0
- output tokens: 32768
Best Practices: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507#best-practices
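As a minimal sketch, the settings above can be collected into a generation config and passed to `model.generate(**inputs, **gen_config)` in 🤗 Transformers (or mapped to the equivalent llama.cpp flags `--temp`, `--top-p`, `--top-k`, `--min-p`); the surrounding model-loading code is assumed, not part of this card:

```python
# Recommended sampling settings from the card above.
# Pass as model.generate(**inputs, **gen_config) in 🤗 Transformers,
# or map to llama.cpp flags (--temp, --top-p, --top-k, --min-p).
gen_config = {
    "do_sample": True,        # sampling must be on for temperature/top_p to apply
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_new_tokens": 32768,  # "output tokens" budget from the card
}
```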
| tagirarega/blockassist-bc-tricky_aquatic_piranha_1757434008 | tagirarega | 2025-09-09T16:06:56Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tricky aquatic piranha", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:06:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tricky aquatic piranha
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| raskbdifaslins/blockassist-bc-bipedal_wily_albatross_1757433921 | raskbdifaslins | 2025-09-09T16:05:56Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bipedal wily albatross", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:05:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bipedal wily albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| Vasya777/blockassist-bc-lumbering_enormous_sloth_1757433883 | Vasya777 | 2025-09-09T16:05:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:05:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| dulmaranoldman/blockassist-bc-sly_pensive_whale_1757433895 | dulmaranoldman | 2025-09-09T16:05:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly pensive whale", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:05:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly pensive whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| bah63843/blockassist-bc-plump_fast_antelope_1757433843 | bah63843 | 2025-09-09T16:04:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:04:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| dbbh03149/blockassist-bc-eager_armored_coyote_1757433866 | dbbh03149 | 2025-09-09T16:04:34Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "eager armored coyote", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:04:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- eager armored coyote
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| lukashossain3425/blockassist-bc-freckled_twitchy_wallaby_1757433838 | lukashossain3425 | 2025-09-09T16:04:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "freckled twitchy wallaby", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:04:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- freckled twitchy wallaby
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| jemijorna596/blockassist-bc-reclusive_monstrous_pig_1757433818 | jemijorna596 | 2025-09-09T16:03:46Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive monstrous pig", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:03:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive monstrous pig
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| palmart111/blockassist-bc-armored_feline_capybara_1757433788 | palmart111 | 2025-09-09T16:03:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored feline capybara", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:03:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| aronlg/blockassist-bc-wiry_insectivorous_bat_1757433754 | aronlg | 2025-09-09T16:03:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry insectivorous bat", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:03:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| giovannidemuri/llama8b-er-v0-hx-seed2_lora | giovannidemuri | 2025-09-09T16:03:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-09T13:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| oekaltegabi/blockassist-bc-tame_dormant_hyena_1757433753 | oekaltegabi | 2025-09-09T16:02:41Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tame dormant hyena", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:02:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tame dormant hyena
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| a1ex971/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-patterned_arctic_shrimp | a1ex971 | 2025-09-09T16:02:05Z | 168 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am patterned_arctic_shrimp", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-23T13:39:39Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am patterned_arctic_shrimp
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| collurasomer/blockassist-bc-nocturnal_majestic_badger_1757433674 | collurasomer | 2025-09-09T16:01:22Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nocturnal majestic badger", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:01:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal majestic badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| abhishekchohan/maesar-4B | abhishekchohan | 2025-09-09T16:01:10Z | 8 | 1 | null | ["safetensors", "qwen3", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:finetune:Qwen/Qwen3-4B-Thinking-2507", "region:us"] | null | 2025-09-08T17:56:26Z |
---
base_model:
- Qwen/Qwen3-4B-Thinking-2507
---
# Maesar
**Maesar-4B**, **Maesar-8B** and **Maesar-32B** are trained using advanced test-time scaling and budget enforcement techniques, specifically designed for autothinking with exceptional long generation capabilities. These models represent a significant advancement in adaptive reasoning, enabling dynamic resource allocation during inference to optimize both performance and computational efficiency.
## Model Details
### Model Description
Maesar-4B, Maesar-8B, and Maesar-32B are transformer-based language models that implement novel training paradigms combining test-time scaling with budget enforcement mechanisms. The models are engineered for adaptive autothinking, dynamically switching between reasoning and direct-response modes based on query complexity, while maintaining coherent long-form generation exceeding 16,384 tokens.
- **Architecture:** Transformer-based with adaptive reasoning layers
- **Parameters:** 4B (Maesar-4B), 8B (Maesar-8B), 32B (Maesar-32B)
- **Base Models:**
- **Maesar-4B:** Built on [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
- **Maesar-8B:** Built on [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
- **Maesar-32B:** Built on [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
## Key Features
### 🧠 Test-Time Scaling Architecture
- **Adaptive Resource Allocation:** Dynamic computational budget allocation based on query complexity
- **Compute-Optimal Strategy:** Up to 4x more efficient than traditional best-of-N baselines
- **FLOPs-Matched Performance:** Competitive with models 14x larger on reasoning tasks
### 🎯 Budget Enforcement Training
- **Dynamic Budget Control:** Intelligent resource management during training and inference
- **Efficiency Optimization:** Reduced computational overhead while maintaining quality
- **Scalable Performance:** Consistent performance across different computational budgets
### 🔄 Autothinking Capabilities
- **Adaptive Reasoning:** Automatic switching between step-by-step thinking and direct response
- **Query Complexity Classification:** Intelligent assessment of task difficulty
- **Steering Vector Guidance:** Advanced reasoning pattern guidance using activation-level steering
### 📝 Long Generation Excellence
- **Extended Output Length:** Capable of generating coherent text exceeding 10,000 words
- **Maintained Quality:** Consistent quality across long-form generation tasks
- **Diverse Applications:** Suitable for technical documentation, creative writing, and analytical reports
## Uses
### Direct Use
The Maesar models are designed for:
- **Complex Reasoning Tasks:** Mathematical problem-solving, logical reasoning, and multi-step analysis
- **Long-Form Content Generation:** Technical documentation, research reports, creative writing
- **Adaptive Question Answering:** Dynamic response complexity based on query requirements
- **Code Generation and Analysis:** Programming tasks with detailed explanations
- **Educational Content:** Step-by-step tutorials and explanations
### Downstream Use
These models can be fine-tuned for:
- **Domain-Specific Reasoning:** Scientific, legal, or financial analysis
- **Specialized Content Generation:** Technical writing in specific fields
- **Interactive AI Assistants:** Conversational agents with adaptive thinking
- **Research Applications:** Academic writing and analysis tools
### Out-of-Scope Use
- **Factual Information Retrieval:** Should not be used as primary source for current events or factual data without verification
- **Safety-Critical Decisions:** Not intended for medical, legal, or safety-critical decision making without human oversight
## Bias, Risks, and Limitations
### Known Limitations
- **Training Data Bias:** May reflect biases present in training datasets
- **Context Length Constraints:** While optimized for long generation, context window limitations still apply
- **Reasoning Consistency:** Adaptive reasoning may produce different outputs for similar queries
### Recommendations
Users should be aware that:
- Models may exhibit biases from training data and should be evaluated for specific use cases
- Generated content should be fact-checked for accuracy, especially for specialized domains
- Performance may vary based on query complexity and available computational resources
- Regular evaluation and monitoring is recommended for production deployments
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "abhishekchohan/maesar-32B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Basic inference
prompt = "Explain the concept of test-time scaling in large language models:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate with adaptive thinking
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=2048,
        temperature=0.7,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
### Training Data
The models were trained on a carefully curated dataset comprising:
- **High-Quality Text:** Diverse corpus of academic papers, technical documentation, and literature
- **Reasoning Examples:** Mathematical proofs, logical puzzles, and step-by-step problem solving
- **Code and Technical Content:** Programming examples with detailed explanations
- **Multilingual Sources:** English-focused with multilingual reasoning examples
### Training Procedure
#### Training Methodology
- **Test-Time Scaling Integration:** Novel training paradigm incorporating adaptive resource allocation
- **Budget Enforcement Learning:** Dynamic budget control during training phases
- **Multi-Stage Training:** Progressive complexity increases with budget adaptation
- **Autothinking Supervision:** Reinforcement learning for adaptive reasoning behavior
#### Training Hyperparameters
- **Training Regime:** Mixed precision (FP16/BF16) with gradient checkpointing
- **Optimizer:** AdamW with cosine learning rate schedule
- **Batch Size:** 32 (Maesar-8B), 16 (Maesar-32B)
- **Learning Rate:** 2e-4 (initial), with warmup and decay
- **Sequence Length:** Up to 65536 tokens during training
- **Budget Scaling Factor:** Adaptive (0.5x - 4x based on complexity)
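As an illustrative sketch (not the authors' training code), the warmup-plus-cosine-decay schedule described above can be written as a plain function of the step index; the warmup length and total step count are assumptions chosen for the example:

```python
import math

BASE_LR = 2e-4       # initial learning rate from the card
WARMUP_STEPS = 100   # assumed warmup length (illustrative)
TOTAL_STEPS = 1000   # assumed total training steps (illustrative)

def cosine_lr(step: int) -> float:
    """Linear warmup to BASE_LR, then cosine decay to zero."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * progress))
```

In practice the same shape is usually obtained from an optimizer library's scheduler (e.g. a `LambdaLR` wrapping this function) rather than applied by hand.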
#### Test-Time Scaling Efficiency
- **Computational Efficiency:** 4.2x improvement over baseline methods
- **Adaptive Resource Usage:** 56% reduction in reasoning tokens for simple queries
- **Performance Retention:** <2% accuracy degradation with budget optimization
## Technical Specifications
### Model Architecture and Objective
The models implement a novel transformer architecture enhanced with:
- **Adaptive Reasoning Layers:** Specialized layers for dynamic thinking activation
- **Budget Control Mechanisms:** Hardware-aware computational resource management
- **Steering Vector Integration:** Activation-level guidance for reasoning patterns
- **Long Context Optimization:** Extended attention patterns for coherent long generation
### Base Model Specifications
**Maesar-8B (Based on DeepSeek-R1-0528-Qwen3-8B):**
- **Foundation:** Enhanced DeepSeek-R1 architecture with Qwen3 improvements
- **Context Window:** Extended context length support
- **Reasoning Capabilities:** Built-in step-by-step thinking patterns
**Maesar-32B (Based on QwQ-32B):**
- **Foundation:** Qwen-based QwQ (Qwen with Questions) architecture
- **Advanced Reasoning:** Native question decomposition and analysis
- **Multilingual Support:** Enhanced multilingual reasoning capabilities
### Compute Infrastructure
#### Hardware Requirements
**Minimum Requirements (Maesar-4B):**
- **GPU Memory:** 12GB VRAM (FP16)
- **System Memory:** 24GB RAM
- **Storage:** 12GB available space
**Minimum Requirements (Maesar-8B):**
- **GPU Memory:** 16GB VRAM (FP16)
- **System Memory:** 32GB RAM
- **Storage:** 20GB available space
**Recommended (Maesar-8B):**
- **GPU:** RTX 4090, A100, or H100
- **GPU Memory:** 24GB+ VRAM
- **System Memory:** 64GB RAM
**Minimum Requirements (Maesar-32B):**
- **GPU Memory:** 64GB VRAM (FP16) or multi-GPU setup
- **System Memory:** 128GB RAM
- **Storage:** 80GB available space
#### Software
- **Transformers:** ≥4.51.0
## Model Lineage
### Base Model Credits
**Maesar-4B:**
- **Base Model:** [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
- **Foundation Architecture:** Scaled reasoning from Qwen3-4B
- **Original Developers:** Qwen Team (Alibaba Cloud)
**Maesar-8B:**
- **Base Model:** [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
- **Foundation Architecture:** DeepSeek-R1 with Qwen3 enhancements
- **Original Developers:** DeepSeek AI
**Maesar-32B:**
- **Base Model:** [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
- **Foundation Architecture:** Qwen-based QwQ (Qwen with Questions) reasoning
- **Original Developers:** Qwen Team (Alibaba Cloud)
## Acknowledgments
This work builds upon foundational research in test-time scaling, adaptive reasoning, and long-form generation. Special thanks to:
- **DeepSeek AI** for the DeepSeek-R1-0528-Qwen3-8B base model and pioneering work in reasoning models
- **Qwen Team (Alibaba Cloud)** for the QwQ-32B base model and advanced question-answering architectures
- The broader research community for advancing the field of efficient language model architectures
We gratefully acknowledge the contributions of these base models, which provided the foundational capabilities that we enhanced with test-time scaling and budget enforcement techniques.
| kimakurbain803/blockassist-bc-marine_sharp_armadillo_1757433617 | kimakurbain803 | 2025-09-09T16:00:55Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "marine sharp armadillo", "arxiv:2504.07091", "region:us"] | null | 2025-09-09T16:00:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine sharp armadillo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| mradermacher/Pilot-3B-GGUF | mradermacher | 2025-09-09T16:00:19Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "dataset:songff/GenerAlign", "base_model:songff/Pilot-3B", "base_model:quantized:songff/Pilot-3B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-09T15:29:00Z |
---
base_model: songff/Pilot-3B
datasets:
- songff/GenerAlign
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/songff/Pilot-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Pilot-3B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pilot-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Pilot-3B-GGUF/resolve/main/Pilot-3B.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
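As a small illustration of reading the table, the sketch below picks the largest quant that fits a given memory budget. The sizes are copied from the table above; the "largest that fits" heuristic is our assumption for the example, not the author's recommendation:

```python
# (quant name, size in GB) pairs taken from the table above.
QUANTS = [
    ("Q2_K", 1.5), ("Q3_K_S", 1.6), ("Q3_K_M", 1.8), ("Q3_K_L", 1.9),
    ("IQ4_XS", 1.9), ("Q4_K_S", 2.0), ("Q4_K_M", 2.1), ("Q5_K_S", 2.4),
    ("Q5_K_M", 2.4), ("Q6_K", 2.7), ("Q8_0", 3.5), ("f16", 6.5),
]

def largest_fitting_quant(budget_gb: float) -> str:
    """Return the largest quant (a rough proxy for quality) within budget."""
    fitting = [(size, name) for name, size in QUANTS if size <= budget_gb]
    if not fitting:
        raise ValueError("No quant fits the given memory budget")
    return max(fitting)[1]
```

For example, a 2.2 GB budget selects Q4_K_M, matching the "fast, recommended" entries in the table.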
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757433563
|
Rootu
| 2025-09-09T16:00:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T16:00:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shericashmanhtr/blockassist-bc-solitary_dense_scorpion_1757433588
|
shericashmanhtr
| 2025-09-09T16:00:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"solitary dense scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:59:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- solitary dense scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexiseeifl/blockassist-bc-fleecy_flapping_pigeon_1757433415
|
alexiseeifl
| 2025-09-09T15:57:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fleecy flapping pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:57:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fleecy flapping pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
metafacerunner/blockassist-bc-running_scaly_eagle_1757431478
|
metafacerunner
| 2025-09-09T15:56:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"running scaly eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:56:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- running scaly eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bsksisysisbss/blockassist-bc-galloping_scampering_cobra_1757433374
|
bsksisysisbss
| 2025-09-09T15:56:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"galloping scampering cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:56:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- galloping scampering cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palmart111/blockassist-bc-armored_feline_capybara_1757433334
|
palmart111
| 2025-09-09T15:56:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored feline capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:56:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
weruior/blockassist-bc-striped_aquatic_tiger_1757433344
|
weruior
| 2025-09-09T15:56:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"striped aquatic tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:55:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- striped aquatic tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
enrikhoxha421/blockassist-bc-burrowing_invisible_raven_1757433342
|
enrikhoxha421
| 2025-09-09T15:56:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"burrowing invisible raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:55:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- burrowing invisible raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1757430860
|
acidjp
| 2025-09-09T15:55:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:55:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yujingfeng/base_all_2
|
yujingfeng
| 2025-09-09T15:55:11Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"llama-factory",
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T15:32:38Z |
---
license: apache-2.0
tags:
- llama-factory
---
|
dhisowyeioe85373/blockassist-bc-reptilian_arctic_lemur_1757433279
|
dhisowyeioe85373
| 2025-09-09T15:54:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reptilian arctic lemur",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:54:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reptilian arctic lemur
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/qwen_2.5_0.5b_sft_type_anneal_condition_split_1_from_637
|
ChenWu98
| 2025-09-09T15:54:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ChenWu98/qwen_2.5_0.5b_sft_type_condition",
"base_model:finetune:ChenWu98/qwen_2.5_0.5b_sft_type_condition",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:54:34Z |
---
base_model: ChenWu98/qwen_2.5_0.5b_sft_type_condition
library_name: transformers
model_name: qwen_2.5_0.5b_sft_type_anneal_condition_split_1_from_637
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen_2.5_0.5b_sft_type_anneal_condition_split_1_from_637
This model is a fine-tuned version of [ChenWu98/qwen_2.5_0.5b_sft_type_condition](https://huggingface.co/ChenWu98/qwen_2.5_0.5b_sft_type_condition).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/qwen_2.5_0.5b_sft_type_anneal_condition_split_1_from_637", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/ugkjpbo0)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MagicalAlchemist/bge-m3-Q8_0-GGUF
|
MagicalAlchemist
| 2025-09-09T15:54:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:BAAI/bge-m3",
"base_model:quantized:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-09T15:54:46Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- llama-cpp
- gguf-my-repo
license: mit
base_model: BAAI/bge-m3
---
# MagicalAlchemist/bge-m3-Q8_0-GGUF
This model was converted to GGUF format from [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-m3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MagicalAlchemist/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MagicalAlchemist/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MagicalAlchemist/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MagicalAlchemist/bge-m3-Q8_0-GGUF --hf-file bge-m3-q8_0.gguf -c 2048
```
|
maukluchoda/blockassist-bc-placid_stinky_buffalo_1757433244
|
maukluchoda
| 2025-09-09T15:54:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid stinky buffalo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:54:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid stinky buffalo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757433206
|
Rootu
| 2025-09-09T15:54:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:54:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1757433192
|
sekirr
| 2025-09-09T15:53:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:53:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757433140
|
aronlg
| 2025-09-09T15:53:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:53:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757433158
|
Stasonelison
| 2025-09-09T15:53:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:53:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
laconadaomy/blockassist-bc-squeaky_invisible_mole_1757433183
|
laconadaomy
| 2025-09-09T15:53:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"squeaky invisible mole",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:53:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- squeaky invisible mole
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757433110
|
bah63843
| 2025-09-09T15:52:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:52:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
costiganreanna/blockassist-bc-marine_muscular_puma_1757433137
|
costiganreanna
| 2025-09-09T15:52:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"marine muscular puma",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:52:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- marine muscular puma
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sapwkbszbskospw/blockassist-bc-bold_scavenging_nightingale_1757433106
|
sapwkbszbskospw
| 2025-09-09T15:51:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold scavenging nightingale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:51:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold scavenging nightingale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
philipsyodavebbfs/blockassist-bc-insectivorous_pensive_bison_1757433077
|
philipsyodavebbfs
| 2025-09-09T15:51:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous pensive bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:51:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous pensive bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757432962
|
bah63843
| 2025-09-09T15:50:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:50:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1757431388
|
vwzyrraz7l
| 2025-09-09T15:48:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:48:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757432843
|
Rootu
| 2025-09-09T15:48:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:48:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palmart111/blockassist-bc-armored_feline_capybara_1757432789
|
palmart111
| 2025-09-09T15:47:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored feline capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:46:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
utter-project/SpireNoPseudo
|
utter-project
| 2025-09-09T15:46:35Z | 7 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2503.10620",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-11T16:03:59Z |
---
license: cc-by-nc-4.0
---
# SpireLM
Spire is a 7B parameter decoder-only model with strong abilities in machine translation, automatic speech recognition, and speech translation. [SpireBase](https://huggingface.co/utter-project/SpireBase) was created by applying speech-centric continued pretraining to [TowerBase-7B-v0.1](https://huggingface.co/Unbabel/TowerBase-7B-v0.1), which was itself created by applying continued pretraining to [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b).
## Model Checkpoints
We release our checkpoints through Hugging Face. All of our models can be loaded as `LlamaForCausalLM` instances, allowing inference to be performed with [vLLM](https://github.com/vllm-project/vllm). For further details on the models, check [the paper](https://arxiv.org/abs/2503.10620).
| Model | Path |
| ----- | ---- |
| SpireBase | [utter-project/SpireBase](https://huggingface.co/utter-project/SpireBase) |
| SpireFull | [utter-project/SpireFull](https://huggingface.co/utter-project/SpireFull) |
| SpireNoBlocks | [utter-project/SpireNoBlocks](https://huggingface.co/utter-project/SpireNoBlocks) |
| SpireNoPseudo | [utter-project/SpireNoPseudo](https://huggingface.co/utter-project/SpireNoPseudo) |
| TowerFull | [utter-project/TowerFull](https://huggingface.co/utter-project/TowerFull) |
## Tokenizing Speech
The core of our approach to speech is *discretization* - continuous speech signals are converted into sequences of tokens, which can then be processed alongside text. Our discretization system consists of a few steps:
1. HuBERT Large ([fairseq download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt)) converts 16kHz .wav files into a sequence of feature vectors, one for each 20ms frame. We use the representations from layer 22.
2. Our k-means model ([download](https://huggingface.co/utter-project/SpireKMeans/resolve/main/kmeans_model)) maps each frame to one of 5000 clusters.
3. The sequences of cluster IDs are deduplicated, such that consecutive frames with the same label are collapsed into a single token. This usually shortens the sequence length by about 30%.
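The run-length collapse in step 3 can be sketched as follows (a minimal illustration of the idea, not the actual `spire` implementation):

```python
from itertools import groupby

def deduplicate(cluster_ids):
    """Collapse each run of identical consecutive cluster IDs into a single token."""
    return [key for key, _ in groupby(cluster_ids)]

# Frame-level cluster IDs from the k-means step; consecutive repeats are merged.
frames = [12, 12, 12, 7, 7, 4999, 4999, 4999, 7]
print(deduplicate(frames))  # [12, 7, 4999, 7]
```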
The `spire` package implements this pipeline. Assuming you have downloaded both of these files, you can use it like so:
```python
from datasets import load_dataset
from spire.dsus import Labeler
from spire.utils import fix_fleurs_path
fleurs = load_dataset("google/fleurs", "en_us")
wav = fix_fleurs_path(fleurs["validation"][29], "validation")
labeler = Labeler("hubert_large_ll60k.pt", "kmeans_model")
speech_tokens = labeler.label(wav)
print(speech_tokens)
```
The output will not be very readable, as it consists of a sequence of Unicode [private use area](https://en.wikipedia.org/wiki/Private_Use_Areas) characters. However, these characters are known to the Spire tokenizer and can be combined with text:
TODO: add ASR/ST examples with this sequence
## Reproducing our Inference Results
TODO: ducttape example
## Reproducing our Training
## Citation
If you use Spire, please cite our work:
```
@misc{spire,
title={From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM},
author={Kshitij Ambilduke and Ben Peters and Sonal Sannigrahi and Anil Keshwani and Tsz Kin Lam and Bruno Martins and Marcely Zanon Boito and André F. T. Martins},
year={2025},
eprint={2503.10620},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10620}
}
```
# Funding Information
<img src="https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/HbzC1C-uHe25ewTy2wyoK.png" width=7% height=7%>
This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by the European Union's Horizon Europe Research and Innovation programme under grant agreement number 101070631.
For more information please visit https://he-utter.eu/
|
oyshimimi50/blockassist-bc-alert_colorful_pigeon_1757432719
|
oyshimimi50
| 2025-09-09T15:45:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert colorful pigeon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:45:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert colorful pigeon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
utter-project/SpireBase
|
utter-project
| 2025-09-09T15:45:31Z | 9 | 3 | null |
[
"safetensors",
"llama",
"arxiv:2503.10620",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-03-11T15:42:09Z |
---
license: cc-by-nc-4.0
---
# SpireLM
Spire is a 7B parameter decoder-only model with strong abilities in machine translation, automatic speech recognition, and speech translation. [SpireBase](https://huggingface.co/utter-project/SpireBase) was created by applying speech-centric continued pretraining to [TowerBase-7B-v0.1](https://huggingface.co/Unbabel/TowerBase-7B-v0.1), which was itself created by applying continued pretraining to [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b).
## Model Checkpoints
We release our checkpoints through Hugging Face. All of our models can be loaded as `LlamaForCausalLM` instances, allowing inference to be performed with [vLLM](https://github.com/vllm-project/vllm). For further details on the models, check [the paper](https://arxiv.org/abs/2503.10620).
| Model | Path |
| ----- | ---- |
| SpireBase | [utter-project/SpireBase](https://huggingface.co/utter-project/SpireBase) |
| SpireFull | [utter-project/SpireFull](https://huggingface.co/utter-project/SpireFull) |
| SpireNoBlocks | [utter-project/SpireNoBlocks](https://huggingface.co/utter-project/SpireNoBlocks) |
| SpireNoPseudo | [utter-project/SpireNoPseudo](https://huggingface.co/utter-project/SpireNoPseudo) |
| TowerFull | [utter-project/TowerFull](https://huggingface.co/utter-project/TowerFull) |
## Tokenizing Speech
The core of our approach to speech is *discretization* - continuous speech signals are converted into sequences of tokens, which can then be processed alongside text. Our discretization system consists of a few steps:
1. HuBERT Large ([fairseq download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt)) converts 16kHz .wav files into a sequence of feature vectors, one for each 20ms frame. We use the representations from layer 22.
2. Our k-means model ([download](https://huggingface.co/utter-project/SpireKMeans/resolve/main/kmeans_model)) maps each frame to one of 5000 clusters.
3. The sequences of cluster IDs are deduplicated, such that consecutive frames with the same label are collapsed into a single token. This usually shortens the sequence length by about 30%.
The `spire` package implements this pipeline. Assuming you have downloaded both of these files, you can use it like so:
```python
from datasets import load_dataset
from spire.dsus import Labeler
from spire.utils import fix_fleurs_path
fleurs = load_dataset("google/fleurs", "en_us")
wav = fix_fleurs_path(fleurs["validation"][29], "validation")
labeler = Labeler("hubert_large_ll60k.pt", "kmeans_model")
speech_tokens = labeler.label(wav)
print(speech_tokens)
```
The output will not be very readable, as it consists of a sequence of Unicode [private use area](https://en.wikipedia.org/wiki/Private_Use_Areas) characters. However, these characters are known to the Spire tokenizer and can be combined with text:
TODO: add ASR/ST examples with this sequence
## Reproducing our Inference Results
TODO: ducttape example
## Reproducing our Training
## Citation
If you use Spire, please cite our work:
```
@misc{spire,
title={From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM},
author={Kshitij Ambilduke and Ben Peters and Sonal Sannigrahi and Anil Keshwani and Tsz Kin Lam and Bruno Martins and Marcely Zanon Boito and André F. T. Martins},
year={2025},
eprint={2503.10620},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10620}
}
```
# Funding Information
<img src="https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/HbzC1C-uHe25ewTy2wyoK.png" width=7% height=7%>
This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by the European Union's Horizon Europe Research and Innovation programme under grant agreement number 101070631.
For more information please visit https://he-utter.eu/
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1757432657
|
kittygirlhere
| 2025-09-09T15:44:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:44:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
karthickhere/blockassist-bc-voracious_quiet_bear_1757432655
|
karthickhere
| 2025-09-09T15:44:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious quiet bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:44:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious quiet bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757432633
|
bah63843
| 2025-09-09T15:44:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:44:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
denbyserahobey/blockassist-bc-regal_shiny_capybara_1757432647
|
denbyserahobey
| 2025-09-09T15:44:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"regal shiny capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:44:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal shiny capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lennard-Heuer/Trained_LLM_Task2_2025_9_10
|
Lennard-Heuer
| 2025-09-09T15:44:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:43:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
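Until the figures above are filled in, a rough estimate can be sketched from the calculator's inputs. The numbers below are placeholders for illustration, not measurements for this model.

```python
# Back-of-the-envelope CO2eq estimate in the spirit of the ML Impact calculator:
#   emissions (kg CO2eq) = power draw (kW) * hours used * grid carbon intensity (kg CO2eq/kWh)
def estimate_emissions_kg(power_kw: float, hours: float, intensity_kg_per_kwh: float) -> float:
    return power_kw * hours * intensity_kg_per_kwh

# Placeholder values: a single 300 W accelerator running 100 hours on a 0.4 kg/kWh grid.
print(round(estimate_emissions_kg(0.3, 100, 0.4), 2))  # 12.0
```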
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NotoriousH2/Qwen3-4B-Instruct-2507-Rude-LORA_Rude_LoRA
|
NotoriousH2
| 2025-09-09T15:44:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:44:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
navymarsmotby/blockassist-bc-chattering_iridescent_albatross_1757432619
|
navymarsmotby
| 2025-09-09T15:43:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering iridescent albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:43:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering iridescent albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757432501
|
bah63843
| 2025-09-09T15:42:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:42:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Papaperez/Qwen3-0.6B-Gensyn-Swarm-wise_crested_cat
|
Papaperez
| 2025-09-09T15:42:06Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am wise_crested_cat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T21:03:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am wise_crested_cat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF
|
andrewwentzel-epsilon
| 2025-09-09T15:40:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft",
"base_model:quantized:andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:39:41Z |
---
library_name: transformers
tags:
- trl
- sft
- llama-cpp
- gguf-my-repo
base_model: andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft
---
# andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF
This model was converted to GGUF format from [`andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft`](https://huggingface.co/andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft) for more details on the model.
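As a rough sense of scale for the Q4_K_M file you will download, the arithmetic below estimates its size. The bits-per-weight figure is an illustrative average for Q4_K_M, not an exact property of this checkpoint, and the real file will differ somewhat (embedding and output layers are often quantized differently).

```python
# Back-of-the-envelope file size for a ~7B-parameter model at Q4_K_M,
# assuming roughly 4.8 bits per weight on average (illustrative figure).
params = 7_000_000_000
bits_per_weight = 4.8
size_gib = params * bits_per_weight / 8 / (1024 ** 3)
print(round(size_gib, 1))  # roughly 3.9 GiB under these assumptions
```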
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-sft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-sft-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-sft-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft-Q4_K_M-GGUF --hf-file qwen2.5-7b-instruct-sft-q4_k_m.gguf -c 2048
```
|
karthickhere/blockassist-bc-voracious_quiet_bear_1757432332
|
karthickhere
| 2025-09-09T15:39:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious quiet bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:39:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious quiet bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lennard-Heuer/results
|
Lennard-Heuer
| 2025-09-09T15:38:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-09-09T15:37:41Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) (fine-tuning dataset not specified).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: PAGED_ADAMW_8BIT (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 60
- mixed_precision_training: Native AMP
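The hyperparameters above can be sanity-checked with a short sketch: the total train batch size the Trainer reports is the per-device batch size times the gradient accumulation steps (a single device is assumed here), and the linear scheduler with warmup ramps the learning rate up over the first 10 steps before decaying it to zero at step 60. The `linear_lr` helper below is an outline of that schedule's shape, not the Trainer's exact implementation.

```python
# Effective (total) train batch size: per-device batch size * accumulation steps.
train_batch_size = 2
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16

# Linear schedule with warmup: ramp up for warmup_steps, then decay linearly to zero.
def linear_lr(step, base_lr=3e-5, warmup_steps=10, total_steps=60):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(10))  # peak learning rate: 3e-05
```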
### Training results
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
weruior/blockassist-bc-meek_trotting_bat_1757432292
|
weruior
| 2025-09-09T15:38:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek trotting bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:38:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek trotting bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
viatlov/blockassist-bc-masked_amphibious_donkey_1757432203
|
viatlov
| 2025-09-09T15:38:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked amphibious donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:37:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked amphibious donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757432212
|
bah63843
| 2025-09-09T15:37:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:37:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
reyq007/blockassist-bc-miniature_sprightly_fly_1757431388
|
reyq007
| 2025-09-09T15:36:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature sprightly fly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:35:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature sprightly fly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gtallec-kog/Llama3.2-1B-ARC-ft-lr2e-4-r16
|
gtallec-kog
| 2025-09-09T15:36:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T15:36:15Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sivakrishna123/my-jarvis-4bit-GGUF
|
sivakrishna123
| 2025-09-09T15:36:23Z | 1,912 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"gpt2",
"en",
"base_model:openai-community/gpt2",
"base_model:quantized:openai-community/gpt2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-04T14:12:05Z |
---
base_model: openai-community/gpt2
tags:
- text-generation-inference
- transformers
- gpt2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sivakrishna123
- **License:** apache-2.0
- **Finetuned from model:** openai-community/gpt2
This gpt2 model was trained 2x faster with Hugging Face's TRL library.
|
omerbkts/blockassist-bc-keen_fast_giraffe_1757432102
|
omerbkts
| 2025-09-09T15:36:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:35:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
khaliqabdull/humanizer3.0-lora
|
khaliqabdull
| 2025-09-09T15:35:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:35:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757432107
|
Stasonelison
| 2025-09-09T15:35:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"howling powerful aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:35:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rootu/blockassist-bc-snorting_fleecy_goose_1757432095
|
Rootu
| 2025-09-09T15:35:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:35:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HANNAHANNUS/GenAi
|
HANNAHANNUS
| 2025-09-09T15:35:05Z | 0 | 0 | null |
[
"text-generation",
"en",
"region:us"
] |
text-generation
| 2024-07-16T03:24:01Z |
---
language: en
tags:
- text-generation
pipeline_tag: text-generation
---
# GenAi
This model was uploaded by Hannath M.A.
It is designed for **text generation** tasks.
## Usage
```python
from transformers import pipeline
generator = pipeline("text-generation", model="HANNAHANNUS/GenAi")
print(generator("Hello, my name is Hannath and I am")[0]['generated_text'])
```
|
Lennard-Heuer/results-qlora
|
Lennard-Heuer
| 2025-09-09T15:34:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] |
text-generation
| 2025-09-09T15:27:04Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- base_model:adapter:meta-llama/Meta-Llama-3.1-8B-Instruct
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: results-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results-qlora
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
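The total train batch size above follows from the per-device batch size and gradient accumulation; a quick sanity check of that arithmetic:

```python
# The effective (total) train batch size is the per-device batch size
# multiplied by the number of gradient accumulation steps.
train_batch_size = 8
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the value reported above
```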
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
Almaan/finetuned_qwen_tokenizer
|
Almaan
| 2025-09-09T15:33:47Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-09T15:33:44Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757430441
|
seams01
| 2025-09-09T15:32:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757431919
|
bah63843
| 2025-09-09T15:32:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aronlg/blockassist-bc-wiry_insectivorous_bat_1757431904
|
aronlg
| 2025-09-09T15:32:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry insectivorous bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry insectivorous bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
foridaparvin76474/blockassist-bc-skittish_vigilant_impala_1757431953
|
foridaparvin76474
| 2025-09-09T15:32:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skittish vigilant impala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skittish vigilant impala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gtallec-kog/Llama3.2-1B-ARC-ft-lr5e-5-r16
|
gtallec-kog
| 2025-09-09T15:32:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T15:31:55Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hagenbaughpaulita/blockassist-bc-snappy_sedate_hedgehog_1757431920
|
hagenbaughpaulita
| 2025-09-09T15:32:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snappy sedate hedgehog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:32:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snappy sedate hedgehog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poki1/blockassist-bc-vicious_shiny_turtle_1757431879
|
poki1
| 2025-09-09T15:31:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious shiny turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:31:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious shiny turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jammerbop/blockassist-bc-foxy_aquatic_baboon_1757431856
|
jammerbop
| 2025-09-09T15:31:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy aquatic baboon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:30:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy aquatic baboon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mistie4525/blockassist-bc-hairy_sprightly_puffin_1757431861
|
mistie4525
| 2025-09-09T15:31:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy sprightly puffin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:31:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy sprightly puffin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
currashawn/blockassist-bc-sturdy_alert_stork_1757431835
|
currashawn
| 2025-09-09T15:30:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy alert stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:30:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy alert stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cxtrazw/crop-reco-final-zim
|
cxtrazw
| 2025-09-09T15:30:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-09T15:30:41Z |
---
license: apache-2.0
---
|
bah63843/blockassist-bc-plump_fast_antelope_1757431776
|
bah63843
| 2025-09-09T15:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:30:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1757431775
|
sekirr
| 2025-09-09T15:30:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:30:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rodrigoburgd/blockassist-bc-scruffy_untamed_hare_1757431775
|
rodrigoburgd
| 2025-09-09T15:29:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scruffy untamed hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:29:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scruffy untamed hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ghost613/VC-MJY_Woman_40s-0_preprocessed-12
|
ghost613
| 2025-09-09T15:29:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-06T08:09:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
suopwuy/blockassist-bc-colorful_marine_alpaca_1757431719
|
suopwuy
| 2025-09-09T15:29:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful marine alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:28:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful marine alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sonnechet/blockassist-bc-webbed_pesty_mallard_1757431701
|
sonnechet
| 2025-09-09T15:29:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"webbed pesty mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:29:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- webbed pesty mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palmart111/blockassist-bc-armored_feline_capybara_1757431280
|
palmart111
| 2025-09-09T15:27:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored feline capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:21:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored feline capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DeusImperator/Valkyrie-49B-v2_exl3_4.0bpw_H6
|
DeusImperator
| 2025-09-09T15:27:33Z | 0 | 0 | null |
[
"safetensors",
"nemotron-nas",
"custom_code",
"base_model:TheDrummer/Valkyrie-49B-v2",
"base_model:quantized:TheDrummer/Valkyrie-49B-v2",
"4-bit",
"exl3",
"region:us"
] | null | 2025-09-09T14:51:06Z |
---
base_model:
- TheDrummer/Valkyrie-49B-v2
---
# Valkyrie-49B-v2 - EXL3 4.0bpw H6
This is a 4.0bpw EXL3 quant of [TheDrummer/Valkyrie-49B-v2](https://huggingface.co/TheDrummer/Valkyrie-49B-v2).
This quant was made with exllamav3 0.0.6 using `--cal_cols 4096` (instead of the default 2048), which in my experience improves quant quality a bit.
It fits in 32 GB of VRAM on Windows with over 20k tokens of context.
I briefly tested this quant in some random RPs and tasks (including ones over 8k and 16k context) and it seems to work fine.
## Prompt Templates
Uses Llama 3 Instruct format.
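For reference, a minimal sketch of the Llama 3 Instruct turn structure (this follows Meta's published template; chat frontends normally assemble this string for you, so treat it as an illustration rather than something you need to build by hand):

```python
# Minimal sketch of the Llama 3 Instruct chat template.
# Header and end-of-turn tokens follow Meta's published format.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a helpful roleplay assistant.", "Hello!"))
```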
### Original readme below
---
# Join our Discord! https://discord.gg/BeaverAI
## More than 7000 members strong 💪 A hub for users and makers alike!
---
## Drummer is open for work / employment (I'm a Software Engineer). Contact me through any of these channels: https://linktr.ee/thelocaldrummer
### Thank you to everyone who subscribed through [Patreon](https://www.patreon.com/TheDrummer). Your support helps me chug along in this brave new world.
### FAQ for those out-of-the-loop
<details>
<summary>🐶 Who is Drummer?</summary>
Hi! I'm Drummer. I'm a Software Engineer with experience in JavaScript, Golang, Python, and generally engineering the crap out of things.
Why I'm in the AI space:
- **Exploration:** Everyone is trying to figure out how AI works and what it's capable of. I am too - just not in creating the smartest, safest model at all costs.
- **Upskill:** The world is headed towards AI. It is here to stay. This has been my way of brushing up on this new form of computing.
- **Value:** I yearn to create value. I feel satisfaction and fulfillment in providing something meaningful for others.
- **Fun:** It's just fun using and making models. It's also fun coming up with theories and realizing them in practice (training AI).
I started my tuning venture back in mid-2024 when I wanted to improve the literary capabilities of language models.
I've come a long way since then and I have branched out and specialized.
Foundational models today are optimized for non-creative uses, and I believe there is a place for AI in creativity and entertainment.
I am here to take *the road less traveled by*.
</details>
<details>
<summary>❓ What are my models like?</summary>
**Bottomline:** My models are usually geared towards creativity, usability, and entertainment!
While intelligence, correctness, and problem solving are not my priority, they are still one of many qualities I want in my models.
The primary goal is to enhance the experience for users looking to use models for creative uses, and other use cases which require no alignment.
In an effort to make it clear to myself and to others what I'm aiming for, I've identified certain qualities that my users often want:
Creativity
- **Writing:** Does it string together words and sentences in a pleasant & effective way? Does it feel like a writer?
- **Dynamism:** How good is the AI at being compelling and intriguing in its storytelling?
- **Imagination:** Can the AI navigate through a plethora of possibilities? Can it skirt incoherence and rise up to absolute coherence at the end of it?
(Dis)alignment
- **Attitude:** Does it refuse in both soft or hard ways? Does it lean towards certain corporate/religious/political ethics & beliefs? How does it see the user and itself?
- **Morality:** Does it know ethics? Is its language infected with forced positivity? If not, can it still moralize over difficult & dubious themes?
- **Formatting:** How stubborn is it with its established formatting? Can it create effective and novel formats to answer the prompt?
**Intelligence**
- **Adherence:** Can it follow instructions? Is it sticking to the prompt? Can it understand you?
- **Knowledge:** Does it know about the world in both fictional and non-fictional ways?
- **Perception:** Can it handle nuance, complexity, and logic?
If it doesn't excel in one of these qualities, or if it's overall mediocre for its size, then I will most likely iterate until I get something right.
</details>
<details>
<summary>💡 Philosophy</summary>
A person is defined by the language they use. Not whether they speak in English or German, but in how they perceive reality.
Just like how we associate a serial killer as a mind that can't map 'murder' to 'evil', an innocent person is a mind that simply can't imagine 'murder'. They get confused when forced to deal with such subjects.
An AI's use of language speaks volumes about its 'perception' of reality. If a language model has been skewed and limited to a positive perception, then its ability to imagine is also limited.
Finetuning is an opportunity to adjust and broaden the language. Corporations use it to achieve safety and compliance. I'm here to
</details>
---
[Drummer](https://huggingface.co/TheDrummer) proudly presents...
# Valkyrie 49B v2 🚁

## Usage
- Llama 3 Chat
- Capable of reasoning like the base model
## Description
> This model is quite good, as a "49b". AI characters are giving quite life-like responses and reactions, with good understanding of complicated concepts.
> I can definitely confirm that the writing style is very good. I've been playing with this all afternoon and am looking forward to an imatrix quant of it. The fact that the reasoning capabilities are preserved is a big plus. They seem to really enhance the quality of the responses if you force the \<think\> token.
> Has good character adherence to a variety of different archetypes, pretty good situation adherence and reacts well to sys commands.
## Links
- Original: https://huggingface.co/TheDrummer/Valkyrie-49B-v2
- GGUF: https://huggingface.co/TheDrummer/Valkyrie-49B-v2-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Valkyrie-49B-v2-GGUF
- EXL3: https://huggingface.co/ArtusDev/TheDrummer_Valkyrie-49B-v2-EXL3
## Special Thanks
`config-v2f`
|
nikilr/Llama3.1-8B-pap_train_v2
|
nikilr
| 2025-09-09T15:27:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T15:26:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Apollo-1-8B-GGUF
|
mradermacher
| 2025-09-09T15:27:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"fr",
"pt",
"de",
"ro",
"sv",
"da",
"bg",
"ru",
"cs",
"el",
"uk",
"es",
"nl",
"sk",
"hr",
"pl",
"lt",
"nb",
"nn",
"fa",
"sl",
"gu",
"lv",
"it",
"oc",
"ne",
"mr",
"be",
"sr",
"lb",
"vec",
"as",
"cy",
"szl",
"ast",
"hne",
"awa",
"mai",
"bho",
"sd",
"ga",
"fo",
"hi",
"pa",
"bn",
"or",
"tg",
"yi",
"lmo",
"lij",
"scn",
"fur",
"sc",
"gl",
"ca",
"is",
"sq",
"li",
"prs",
"af",
"mk",
"si",
"ur",
"mag",
"bs",
"hy",
"zh",
"yue",
"my",
"ar",
"he",
"mt",
"id",
"ms",
"tl",
"ceb",
"jv",
"su",
"min",
"ban",
"pag",
"ilo",
"war",
"ta",
"te",
"kn",
"ml",
"tr",
"az",
"uz",
"kk",
"ba",
"tt",
"th",
"lo",
"fi",
"et",
"hu",
"vi",
"km",
"ja",
"ko",
"ka",
"eu",
"ht",
"pap",
"kea",
"tpi",
"sw",
"base_model:NoemaResearch/Apollo-1-8B",
"base_model:quantized:NoemaResearch/Apollo-1-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-09T15:06:10Z |
---
base_model: NoemaResearch/Apollo-1-8B
language:
- en
- fr
- pt
- de
- ro
- sv
- da
- bg
- ru
- cs
- el
- uk
- es
- nl
- sk
- hr
- pl
- lt
- nb
- nn
- fa
- sl
- gu
- lv
- it
- oc
- ne
- mr
- be
- sr
- lb
- vec
- as
- cy
- szl
- ast
- hne
- awa
- mai
- bho
- sd
- ga
- fo
- hi
- pa
- bn
- or
- tg
- yi
- lmo
- lij
- scn
- fur
- sc
- gl
- ca
- is
- sq
- li
- prs
- af
- mk
- si
- ur
- mag
- bs
- hy
- zh
- yue
- my
- ar
- he
- mt
- id
- ms
- tl
- ceb
- jv
- su
- min
- ban
- pag
- ilo
- war
- ta
- te
- kn
- ml
- tr
- az
- uz
- kk
- ba
- tt
- th
- lo
- fi
- et
- hu
- vi
- km
- ja
- ko
- ka
- eu
- ht
- pap
- kea
- tpi
- sw
library_name: transformers
license: other
license_link: https://huggingface.co/apexion-ai/Nous-V1-8B/blob/main/LICENSE.md
license_name: anvdl-1.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/NoemaResearch/Apollo-1-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Apollo-1-8B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Apollo-1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Apollo-1-8B-GGUF/resolve/main/Apollo-1-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
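As a rough way to compare the rows in the table above, you can back out an approximate bits-per-weight (bpw) figure from each file size. This is a minimal sketch, not an official measurement: the parameter count used below is a hypothetical value back-computed from the 16.5 GB f16 row (at 16 bpw), since the exact count is not stated in this card.

```python
# Approximate bits-per-weight (bpw) for each quant, estimated from file size alone.
# PARAMS is an assumption: back-computed from the f16 row (16.5 GB at 16 bpw),
# not an official parameter count for this model.
GB = 1e9
PARAMS = 16.5 * GB * 8 / 16  # ~8.25e9 weights (hypothetical)

def bpw(size_gb: float) -> float:
    """Rough bits per weight for a quant file of the given size in GB."""
    return size_gb * GB * 8 / PARAMS

for name, size_gb in [("Q2_K", 3.4), ("Q4_K_M", 5.1), ("Q8_0", 8.8), ("f16", 16.5)]:
    print(f"{name}: ~{bpw(size_gb):.1f} bpw")
# → Q2_K: ~3.3 bpw, Q4_K_M: ~4.9 bpw, Q8_0: ~8.5 bpw, f16: ~16.0 bpw
```

Treat these figures only as a coarse tie-breaker between adjacent rows; the graph below gives a better picture of actual quality loss per quant type.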
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
duppbuy/blockassist-bc-prowling_rugged_capybara_1757431532
|
duppbuy
| 2025-09-09T15:25:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling rugged capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:25:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling rugged capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepijn223/pi0_droid_fp32
|
pepijn223
| 2025-09-09T15:25:26Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:24:58Z |
# PI0 Pi0 Droid (PyTorch, 32-bit floating point)
This is a PyTorch version of the PI0 pi0_droid model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0 (Vision-Language-Action model)
- **Model Type**: PI0
- **Domain**: DROID (robotic manipulation)
- **Precision**: 32-bit floating point (fp32)
- **Action Dimension**: 32
- **Action Horizon**: 10
- **Max Token Length**: 48
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Vision-Language-Action**: Multimodal model combining vision, language, and action
- **PaliGemma Backbone**: Leverages PaliGemma for vision-language understanding
- **Continuous State Input**: Direct continuous state input processing
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /fsx/pepijn/pi0_droid \
--config_name pi0_droid \
--output_path /fsx/pepijn/pi0_droid/pytorch/fp32/ \
--precision float32
```
**Conversion Date**: 2025-09-09
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi0_droid_fp32")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **ALOHA models**: Trained on bimanual manipulation tasks
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
andrewwentzel-epsilon/Qwen2.5-7B-Instruct-sft
|
andrewwentzel-epsilon
| 2025-09-09T15:25:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-09T15:16:56Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/pi05_libero_bf16
|
pepijn223
| 2025-09-09T15:24:45Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T15:24:33Z |
# PI0.5 Pi05 Libero (PyTorch, 16-bit floating point)
This is a PyTorch version of the PI0.5 pi05_libero model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input)
- **Model Type**: PI0.5
- **Domain**: LIBERO (diverse manipulation tasks)
- **Precision**: 16-bit floating point (bf16)
- **Action Dimension**: 32
- **Action Horizon**: 10
- **Max Token Length**: 200
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Discrete State Input**: Uses discrete language tokens for state representation
- **Flow Matching**: Utilizes adaRMSNorm for timestep injection in action expert
- **Enhanced Action Modeling**: Improved action prediction with flow matching approach
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /fsx/pepijn/pi05_base \
--config_name pi05_libero \
--output_path /fsx/pepijn/pi05_base/pytorch/bf16/ \
--precision bfloat16
```
**Conversion Date**: 2025-09-09
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi05_libero_bf16")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **ALOHA models**: Trained on bimanual manipulation tasks
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
daronsantos/blockassist-bc-beaked_armored_cougar_1757431437
|
daronsantos
| 2025-09-09T15:24:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked armored cougar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-09T15:24:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked armored cougar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SolGaze/Smoothie-Qwen3-1.7B-Gensyn-Swarm-galloping_barky_crane
|
SolGaze
| 2025-09-09T15:24:10Z | 175 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am galloping_barky_crane",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T12:14:32Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am galloping_barky_crane
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|