Datasets:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-07 06:34:03 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 544 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-07 06:33:46 |
| card | string | length 11 – 1.01M |

Each row below lists the metadata fields in the order modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt, followed by the card text.
calegpedia/blockassist-bc-stealthy_slimy_rooster_1757224041 | calegpedia | 2025-09-07T06:14:41 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, stealthy slimy rooster, arxiv:2504.07091, region:us] | null | 2025-09-07T06:14:38 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arif696/blockassist-bc-regal_spotted_pelican_1757222657 | arif696 | 2025-09-07T05:25:45 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, regal spotted pelican, arxiv:2504.07091, region:us] | null | 2025-09-07T05:25:20 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arif696/blockassist-bc-regal_spotted_pelican_1757220898 | arif696 | 2025-09-07T04:56:18 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, regal spotted pelican, arxiv:2504.07091, region:us] | null | 2025-09-07T04:56:08 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
simhachalamk/blockassist-bc-agile_moist_anteater_1757218000 | simhachalamk | 2025-09-07T04:07:39 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, agile moist anteater, arxiv:2504.07091, region:us] | null | 2025-09-07T04:07:21 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile moist anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1757215602 | akirafudo | 2025-09-07T03:27:34 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, keen fast giraffe, arxiv:2504.07091, region:us] | null | 2025-09-07T03:26:57 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1757212467 | calegpedia | 2025-09-07T03:02:04 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, stealthy slimy rooster, arxiv:2504.07091, region:us] | null | 2025-09-07T03:01:59 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
systbs/zarvan-checkpoints | systbs | 2025-09-07T02:56:31 | 668 | 0 | null | [license:apache-2.0, region:us] | null | 2025-08-22T05:56:32 |
---
license: apache-2.0
---
|
mradermacher/Llama-2-7B-mono-Hausa-GGUF | mradermacher | 2025-09-07T02:52:31 | 0 | 0 | transformers | [transformers, gguf, en, base_model:almanach/Llama-2-7B-mono-Hausa, base_model:quantized:almanach/Llama-2-7B-mono-Hausa, endpoints_compatible, region:us] | null | 2025-09-07T01:51:38 |
---
base_model: almanach/Llama-2-7B-mono-Hausa
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/almanach/Llama-2-7B-mono-Hausa
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-2-7B-mono-Hausa-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
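Purely as an illustration (the card itself prescribes no specific tool), a minimal sketch with the `llama-cpp-python` bindings, assuming one of the quant files from the table below has been downloaded locally:

```python
# Minimal llama-cpp-python sketch; the file name assumes the Q4_K_M quant from the
# table below has been downloaded to the working directory. Other quants work the same way.
from llama_cpp import Llama

llm = Llama(model_path="Llama-2-7B-mono-Hausa.Q4_K_M.gguf", n_ctx=2048)
out = llm("Ina kwana?", max_tokens=64)  # short Hausa prompt: "good morning"
print(out["choices"][0]["text"])
```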
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7B-mono-Hausa-GGUF/resolve/main/Llama-2-7B-mono-Hausa.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DiHe/llama_350m-2025-09-06-19-00-06-adamw-Uniform-350M-lr0_008-WD0_1 | DiHe | 2025-09-07T02:17:45 | 0 | 0 | transformers | [transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-09-07T02:16:17 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
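Since the card provides no code, the following is only a generic loading sketch inferred from the listing's tags (`transformers`, `llama`, `text-generation`); it is an assumption, not part of the original card:

```python
# Generic text-generation loading sketch; generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DiHe/llama_350m-2025-09-06-19-00-06-adamw-Uniform-350M-lr0_008-WD0_1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```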
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Danrisi/Qwen_Ultrareal | Danrisi | 2025-09-07T02:09:42 | 0 | 1 | diffusers | [diffusers, safetensors, license:apache-2.0, region:us] | null | 2025-09-06T05:20:53 |
---
license: apache-2.0
---
|
seraphimzzzz/1328827 | seraphimzzzz | 2025-09-07T02:08:21 | 0 | 0 | null | [region:us] | null | 2025-09-07T02:08:22 |
[View on Civ Archive](https://civarchive.com/models/1265727?modelVersionId=1427492)
|
amethyst9/948754 | amethyst9 | 2025-09-07T02:00:29 | 0 | 0 | null | [region:us] | null | 2025-09-07T02:00:34 |
[View on Civ Archive](https://civarchive.com/models/931438?modelVersionId=1042631)
|
seraphimzzzz/1347405 | seraphimzzzz | 2025-09-07T01:59:48 | 0 | 0 | null | [region:us] | null | 2025-09-07T01:59:48 |
[View on Civ Archive](https://civarchive.com/models/1281875?modelVersionId=1446252)
|
seraphimzzzz/1297736 | seraphimzzzz | 2025-09-07T01:49:15 | 0 | 0 | null | [region:us] | null | 2025-09-07T01:49:16 |
[View on Civ Archive](https://civarchive.com/models/1238117?modelVersionId=1395347)
|
crystalline7/1276605 | crystalline7 | 2025-09-07T01:48:30 | 0 | 0 | null | [region:us] | null | 2025-09-07T01:48:30 |
[View on Civ Archive](https://civarchive.com/models/1218580?modelVersionId=1372751)
|
Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2 | Lorg0n | 2025-09-07T01:42:23 | 180 | 2 | sentence-transformers | [sentence-transformers, tensorboard, onnx, safetensors, bert, sentence-similarity, feature-extraction, dense, ukrainian, english, anime, hikka, generated_from_trainer, dataset_size:160039, loss:MultipleNegativesRankingLoss, hikka-forge, uk, en, arxiv:1908.10084, base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, base_model:quantized:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, license:apache-2.0, autotrain_compatible, text-embeddings-inference, endpoints_compatible, region:us] | sentence-similarity | 2025-08-11T14:27:39 |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- ukrainian
- english
- anime
- hikka
- generated_from_trainer
- dataset_size:160039
- loss:MultipleNegativesRankingLoss
- hikka
- anime
- hikka-forge
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
widget:
- source_sentence: аніме про меланхолійну подорож після перемоги над королем демонів
sentences:
- 'Frieren: Beyond Journey''s End'
- >-
Під час своєї десятирічної подорожі з метою перемоги над Королем Демонів,
члени загону героя - сам Гіммель, священник Гайтер, гном-воїн Айзен...
- K-On!
- source_sentence: a calming, healing 'iyashikei' anime about girls camping
sentences:
- Дівчачий табір△
- Мій сусід Тоторо
- Атака Титанів
pipeline_tag: sentence-similarity
library_name: sentence-transformers
license: apache-2.0
language:
- uk
- en
---
# Hikka-Forge: Fine-tuned Multilingual Sentence Transformer for Anime Semantic Search (UA/EN)
This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`. It is specifically trained to map Ukrainian and English sentences & paragraphs from the **anime domain** into a 384-dimensional dense vector space.
The model is designed for tasks such as semantic search, textual similarity, and clustering within an anime context. It excels at capturing not only direct keywords but also abstract concepts, genres, and the overall atmosphere of a title.
The training dataset was provided by [**hikka.io**](https://hikka.io), a comprehensive Ukrainian encyclopedia for anime, manga, and light novels.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`
- **Languages:** Ukrainian (uk), English (en)
- **Fine-tuning Dataset:** Proprietary dataset from [hikka.io](https://hikka.io)
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Repository:** [This model on Hugging Face](https://huggingface.co/Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2)
- **Original Model:** [paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
## Usage
First, install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then, you can load the model and use it for semantic search or similarity tasks.
```python
from sentence_transformers import SentenceTransformer, util
# Download the model from the 🤗 Hub
model = SentenceTransformer("Lorg0n/hikka-forge-paraphrase-multilingual-MiniLM-L12-v2")
# Example query (can be in Ukrainian or English)
query = "аніме про меланхолійну подорож після перемоги над королем демонів"
# "anime about a melancholic journey after defeating the demon king"
# A corpus of documents to search through
corpus = [
"Frieren is an elf mage who was part of the hero's party that defeated the Demon King. After the journey, she witnesses her human companions pass away due to old age and embarks on a new journey to understand humanity.",
"To Your Eternity follows an immortal being sent to Earth with no emotions nor identity. The being is able to take on the shape of those that leave a strong impression on it.",
"K-On! is a lighthearted story about four high school girls who join the light music club to save it from being disbanded. They spend their days practicing, performing, and hanging out together."
]
# Encode the query and corpus into dense vector embeddings
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
# Compute cosine similarity scores
cosine_scores = util.cos_sim(query_embedding, corpus_embeddings)
# Print the results
print(f"Query: {query}\n")
for i, score in enumerate(cosine_scores[0]):
print(f"Similarity: {score:.4f}\t | Document: {corpus[i][:80]}...")
# Expected Output:
# Query: аніме про меланхолійну подорож після перемоги над королем демонів
#
# Similarity: 0.4013 | Document: Frieren is an elf mage who was part of the hero's party that defeated the Demon ...
# Similarity: 0.1800 | Document: To Your Eternity follows an immortal being sent to Earth with no emotions nor id...
# Similarity: 0.0091 | Document: K-On! is a lighthearted story about four high school girls who join the light mu...
```
## Training Details
### Training Dataset
The model was fine-tuned on a proprietary, high-quality dataset from **[hikka.io](https://hikka.io)**, consisting of **177,822** carefully constructed training pairs. The dataset was engineered to teach the model various semantic relationships within the anime domain:
1. **Cross-lingual Connections (UA ↔ EN):**
* Pairs of titles and their corresponding synopses in both languages (`ua_title` ↔ `en_synopsis`).
* Pairs of titles in Ukrainian and English (`ua_title` ↔ `en_title`).
* Pairs of translated genre names (`Бойовик` ↔ `Action`).
* Pairs from an auxiliary translated dataset to augment bilingual understanding.
2. **Intra-lingual Connections (UA ↔ UA, EN ↔ EN):**
* Pairs of key sentences (first, middle, last) from a synopsis with the full synopsis text. This teaches the model that a part is semantically related to the whole text.
3. **Metadata & Synonymy Injection:**
* Pairs linking all known titles of an anime (Ukrainian, English, Japanese, synonyms) to each other, teaching the model that they refer to the same entity.
* Pairs linking genres and studios to anime titles to ground the model in relevant metadata.
* **Loss Function:** The model was trained with `MultipleNegativesRankingLoss`, a highly effective objective for learning semantic similarity: it treats the other examples in a batch as negative samples, which makes training very efficient (see the sketch below).
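Purely as an illustration (not the author's training script), a minimal sketch of feeding pairs like the ones described above to `MultipleNegativesRankingLoss` with the sentence-transformers fit API; the example pairs are made up, not taken from the hikka.io data:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Illustrative (title, synopsis) pairs; the real dataset mixes titles, synopses,
# genres and synonyms across Ukrainian and English.
train_examples = [
    InputExample(texts=["Frieren: Beyond Journey's End",
                        "An elf mage revisits her past journey after the Demon King's defeat."]),
    InputExample(texts=["K-On!",
                        "Four high school girls join the light music club."]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4, warmup_steps=100)
```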
### Evaluation
The fine-tuned model demonstrates a significantly improved understanding of domain-specific and abstract concepts compared to the base model. During evaluation, it showed:
- **Superior understanding of niche genres:** It correctly identified "Yuru Camp" (Дівчачий табір) from the query `"a calming, healing 'iyashikei' anime"`, while the base model returned more generic results.
- **Grasping abstract concepts:** It correctly found "Magical Girl Site" for the query `"деконструкція жанру махо-шьоджьо, де дівчата-чарівниці страждають психологічно"` (deconstruction of the maho-shoujo genre where magical girls suffer psychologically).
- **Better atmospheric matching:** It showed higher similarity to thematically similar anime (like "Frieren" and "To Your Eternity") and lower similarity to dissimilar ones, proving a deeper contextual understanding.
### Training Hyperparameters
- `learning_rate`: 2e-05
- `per_device_train_batch_size`: 32
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `fp16`: True
- `loss`: MultipleNegativesRankingLoss
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
|
harmonyblevins/blockassist-bc-mute_reclusive_eel_1757208143 | harmonyblevins | 2025-09-07T01:23:42 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, mute reclusive eel, arxiv:2504.07091, region:us] | null | 2025-09-07T01:23:20 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute reclusive eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HouraMor/wh-loraft-lr1e5-dtstf5-adm-ga1ba16-st15k-v2-evalstp10-pat20-trainvalch | HouraMor | 2025-09-07T01:06:36 | 0 | 0 | peft | [peft, tensorboard, safetensors, generated_from_trainer, base_model:HouraMor/wh-ft-lr5e6-dtstf5-adm-ga1ba16-st15k-v2-evalstp500-pat5, base_model:adapter:HouraMor/wh-ft-lr5e6-dtstf5-adm-ga1ba16-st15k-v2-evalstp500-pat5, license:apache-2.0, region:us] | null | 2025-09-06T12:10:10 |
---
library_name: peft
license: apache-2.0
base_model: HouraMor/wh-ft-lr5e6-dtstf5-adm-ga1ba16-st15k-v2-evalstp500-pat5
tags:
- generated_from_trainer
model-index:
- name: wh-loraft-lr1e5-dtstf5-adm-ga1ba16-st15k-v2-evalstp10-pat20-trainvalch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wh-loraft-lr1e5-dtstf5-adm-ga1ba16-st15k-v2-evalstp10-pat20-trainvalch
This model is a fine-tuned version of [HouraMor/wh-ft-lr5e6-dtstf5-adm-ga1ba16-st15k-v2-evalstp500-pat5](https://huggingface.co/HouraMor/wh-ft-lr5e6-dtstf5-adm-ga1ba16-st15k-v2-evalstp500-pat5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3701 | 0.0201 | 10 | 0.5664 |
| 0.3099 | 0.0402 | 20 | 0.5663 |
| 0.2067 | 0.0602 | 30 | 0.5667 |
| 0.3924 | 0.0803 | 40 | 0.5671 |
| 0.3036 | 0.1004 | 50 | 0.5667 |
| 0.2729 | 0.1205 | 60 | 0.5668 |
| 0.2433 | 0.1406 | 70 | 0.5664 |
| 0.3061 | 0.1606 | 80 | 0.5665 |
| 0.2996 | 0.1807 | 90 | 0.5668 |
| 0.3438 | 0.2008 | 100 | 0.5665 |
| 0.2893 | 0.2209 | 110 | 0.5666 |
| 0.2338 | 0.2410 | 120 | 0.5670 |
| 0.2841 | 0.2610 | 130 | 0.5667 |
| 0.2536 | 0.2811 | 140 | 0.5670 |
| 0.3586 | 0.3012 | 150 | 0.5672 |
| 0.3779 | 0.3213 | 160 | 0.5668 |
| 0.2853 | 0.3414 | 170 | 0.5669 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.7.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
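As an aside not found in the original card, a generic sketch for loading a PEFT adapter like this one; the right `Auto*` class depends on the base model's architecture, which the card does not state, so `AutoModel` below is only a placeholder:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModel  # placeholder; use the Auto class matching the base model

adapter_id = "HouraMor/wh-loraft-lr1e5-dtstf5-adm-ga1ba16-st15k-v2-evalstp10-pat20-trainvalch"
config = PeftConfig.from_pretrained(adapter_id)          # reads the adapter config
base = AutoModel.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)      # attaches the LoRA weights
```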
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757206870 | Stasonelison | 2025-09-07T01:01:58 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, howling powerful aardvark, arxiv:2504.07091, region:us] | null | 2025-09-07T01:01:46 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757206628 | Stasonelison | 2025-09-07T00:57:55 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, howling powerful aardvark, arxiv:2504.07091, region:us] | null | 2025-09-07T00:57:46 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- howling powerful aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
raihannabiil/blockassist-bc-humming_rugged_viper_1757201365 | raihannabiil | 2025-09-07T00:06:47 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, humming rugged viper, arxiv:2504.07091, region:us] | null | 2025-09-07T00:06:42 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming rugged viper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1757203457 | omerbektass | 2025-09-07T00:04:39 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, keen fast giraffe, arxiv:2504.07091, region:us] | null | 2025-09-07T00:04:35 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1757201794 | capungmerah627 | 2025-09-07T00:01:48 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, stinging soaring porcupine, arxiv:2504.07091, region:us] | null | 2025-09-07T00:01:45 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Miracle-man/blockassist-bc-singing_lithe_koala_1757201171 | Miracle-man | 2025-09-06T23:56:31 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, singing lithe koala, arxiv:2504.07091, region:us] | null | 2025-09-06T23:56:28 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing lithe koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-dormant_strong_badger_1757202525 | AnerYubo | 2025-09-06T23:48:48 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, dormant strong badger, arxiv:2504.07091, region:us] | null | 2025-09-06T23:48:46 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant strong badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RealTarz/review-insight-multi-business | RealTarz | 2025-09-06T23:38:23 | 0 | 0 | peft | [peft, safetensors, base_model:adapter:roberta-base, lora, transformers, base_model:FacebookAI/roberta-base, base_model:adapter:FacebookAI/roberta-base, license:mit, region:us] | null | 2025-09-06T23:38:21 |
---
library_name: peft
license: mit
base_model: roberta-base
tags:
- base_model:adapter:roberta-base
- lora
- transformers
metrics:
- accuracy
- f1
model-index:
- name: review-insight-multi-business
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# review-insight-multi-business
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Accuracy: 0.9847
- F1: 0.9847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: fused AdamW (adamw_torch_fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5363 | 1.0 | 939 | 0.1258 | 0.9612 | 0.9611 |
| 0.0632 | 2.0 | 1878 | 0.0704 | 0.9795 | 0.9794 |
| 0.0414 | 3.0 | 2817 | 0.0666 | 0.9801 | 0.9801 |
| 0.0358 | 4.0 | 3756 | 0.0495 | 0.9847 | 0.9847 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.0
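The card does not include usage code; a minimal, assumed loading sketch for this LoRA adapter on top of `roberta-base` follows (the number of labels is not stated on the card, so `num_labels=2` is only a placeholder and must match the trained classifier head):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "RealTarz/review-insight-multi-business"
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)  # num_labels assumed
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("Great service and friendly staff!", return_tensors="pt")
logits = model(**inputs).logits
print(logits.softmax(-1))
```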
|
AnerYubo/blockassist-bc-fanged_camouflaged_cassowary_1757201374 | AnerYubo | 2025-09-06T23:29:37 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, fanged camouflaged cassowary, arxiv:2504.07091, region:us] | null | 2025-09-06T23:29:34 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fanged camouflaged cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vendi11/blockassist-bc-placid_placid_llama_1757200703 | vendi11 | 2025-09-06T23:19:05 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, placid placid llama, arxiv:2504.07091, region:us] | null | 2025-09-06T23:19:02 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parlange/mlp-mixer-gravit-b3 | parlange | 2025-09-06T23:15:11 | 0 | 0 | timm | [timm, pytorch, safetensors, vision-transformer, image-classification, mlp-mixer, gravitational-lensing, strong-lensing, astronomy, astrophysics, dataset:parlange/gravit-j24, arxiv:2509.00226, license:apache-2.0, model-index, region:us] | image-classification | 2025-09-06T23:14:20 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- mlp-mixer
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- parlange/gravit-j24
metrics:
- accuracy
- auc
- f1
paper:
- title: "GraViT: A Gravitational Lens Discovery Toolkit with Vision Transformers"
url: "https://arxiv.org/abs/2509.00226"
authors: "Parlange et al."
model-index:
- name: MLP-Mixer-b3
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.8118
name: Average Accuracy
- type: auc
value: 0.7460
name: Average AUC-ROC
- type: f1
value: 0.4426
name: Average F1-Score
---
# 🌌 mlp-mixer-gravit-b3
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: MLP-Mixer
- **🧪 Experiment**: B3 - J24-all-blocks
- **🌌 Dataset**: J24
- **🪐 Fine-tuning Strategy**: all-blocks
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/mlp-mixer-gravit-b3',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** J24 (Jaelani et al. 2024)
**Fine-tuning Strategy:** all-blocks
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | all_blocks |
| Stochastic Depth Probability | 0.1 |
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.0373 | 0.0536 |
| 🎯 Accuracy | 0.9863 | 0.9861 |
| 📊 AUC-ROC | 0.9989 | 0.9990 |
| ⚖️ F1 Score | 0.9863 | 0.9860 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.8118 |
| 📈 Average AUC-ROC | 0.7460 |
| ⚖️ Average F1-Score | 0.4426 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
nkc98/strategy-classification-model | nkc98 | 2025-09-06T23:07:37 | 3 | 0 | null | [safetensors, roberta, license:apache-2.0, region:us] | null | 2024-09-11T16:17:18 |
---
license: apache-2.0
---
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757199913 | cwayneconnor | 2025-09-06T23:07:14 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, mute loud lynx, arxiv:2504.07091, region:us] | null | 2025-09-06T23:06:43 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BasedBase/WEBGEN-4B-Preview-480B-Double-Distill-FP32 | BasedBase | 2025-09-06T22:51:40 | 0 | 1 | null | [safetensors, qwen3, distillation, qwen, causal-lm, webgen, distilled-model, small-model, en, license:apache-2.0, region:us] | null | 2025-09-06T22:45:01 |
---
language: en
license: apache-2.0
tags:
- distillation
- qwen
- causal-lm
- webgen
- distilled-model
- small-model
---
# WEBGEN-4B-Preview-480B-Double-Distill
## This is the unquantized version!
## Model Description
This model was created by distilling the Qwen3-Coder-480B Mixture-of-Experts (MoE) teacher model into the compact and efficient **Tesslate/WEBGEN-4B-Preview** base, merging the LoRA, and then re-distilling that distilled model.
The process is the same, except that the second distillation run blends all experts instead of just 64 and uses a lower DARE-TIES drop rate. This was done to capture more information than a single distillation can. You should notice fewer errors and more functional code. Remember to be specific with prompting; it is a small model, after all.
The purpose of this distillation is to transfer more of the teacher model's knowledge into WEBGEN-4B-Preview and improve its overall performance. The model should perform better at web design, but it is still a 4B model.
**It is recommended to use bf16, since the model is still only about 8 GB and small models are very sensitive to quantization. For optimal results, be specific in your prompting and avoid vague, ambiguous prompts like "Create a website for a taco restaurant". Instead use prompts like "Make a single-file landing page for "RasterFlow" (GPU video pipeline).
Style: modern tech, muted palette, Tailwind, rounded-xl, subtle gradients.
Sections: navbar, hero (big headline + 2 CTAs), logos row, features (3x cards),
code block (copyable), pricing (3 tiers), FAQ accordion, footer.
Constraints: semantic HTML, no external JS. Return ONLY the HTML code."**
## The Distillation Process: In-Depth
The creation of this model was achieved through a novel SVD-based distillation pipeline, designed specifically to tackle the unique challenge of transferring knowledge from a sparse MoE architecture to a dense one. The process ensures maximum fidelity by intelligently selecting and blending expert knowledge rather than using naive averaging.
The methodology can be broken down into five key stages:
### 1. Non-Linear Layer Mapping
A direct linear mapping of layers (e.g., student layer 10 from teacher layer 10) is suboptimal. This pipeline uses a **non-linear sigmoid mapping function** to align the student and teacher layers. This ensures that the critical first and last layers of both models are perfectly aligned, while the intermediate layers are mapped along a smooth curve. This preserves the hierarchical feature development of the teacher model.
For student layers that fall between two integer teacher layers, **Spherical Linear Interpolation (SLERP)** is used on the teacher's weights to create a smooth, interpolated "virtual teacher layer" that accurately represents the knowledge at that specific depth.
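Purely as an illustration (not the author's actual pipeline), a small sketch of what such a sigmoid layer mapping and SLERP between two neighbouring teacher layers could look like; the steepness constant and layer counts are assumed values:

```python
import math
import torch

def map_student_to_teacher(s_idx: int, n_student: int, n_teacher: int, k: float = 6.0) -> float:
    """Sigmoid mapping of a student layer index onto the teacher's depth.

    Endpoints are pinned (first -> first, last -> last); the steepness k is an assumed value.
    """
    def raw(x: float) -> float:
        return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

    t = s_idx / (n_student - 1)                       # normalised student depth in [0, 1]
    s = (raw(t) - raw(0.0)) / (raw(1.0) - raw(0.0))   # rescale so endpoints hit exactly 0 and 1
    return s * (n_teacher - 1)

def slerp(w_lo: torch.Tensor, w_hi: torch.Tensor, frac: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped teacher weight tensors."""
    a, b = w_lo.flatten().float(), w_hi.flatten().float()
    cos = torch.clamp((a @ b) / (a.norm() * b.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos)
    if omega.abs() < 1e-4:                            # nearly parallel: fall back to lerp
        out = (1.0 - frac) * a + frac * b
    else:
        out = (torch.sin((1.0 - frac) * omega) * a + torch.sin(frac * omega) * b) / torch.sin(omega)
    return out.view_as(w_lo)

# Example: student layer 10 (of an assumed 36) lands at a fractional teacher depth
# (assumed 62 teacher layers); the two neighbouring teacher layers are blended with slerp().
depth = map_student_to_teacher(10, 36, 62)
lo, hi, frac = int(depth), int(depth) + 1, depth - int(depth)
```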
### 2. MoE-to-Dense MLP Synthesis
This is the most critical and computationally intensive part of the process. Brute-force averaging all 256 experts from the teacher's MoE block for every layer would be computationally prohibitive and would dilute the specialized knowledge of each expert. Instead, a highly efficient **two-pass intelligent selection method** was used:
#### Pass 1: Centroid Calculation
First, the script creates a "fingerprint" for each of the 256 teacher experts in a given layer. This fingerprint is a flattened vector representation of all of an expert's weights. To find the "center of gravity" of the layer's knowledge, the script calculates the mean of all 256 fingerprints. This is done with a memory-efficient running sum **entirely on the GPU** to avoid VRAM OOMs and CPU-to-GPU transfer bottlenecks. The resulting average fingerprint is called the **centroid**.
#### Pass 2: Intelligent Expert Selection
The script then iterates through the 256 experts a second time. For each expert, it calculates its fingerprint and measures its Euclidean distance to the centroid. The experts closest to the centroid are the most representative of the layer's collective knowledge. The script selects the **top 64 most representative experts** for blending.
#### Final Blending
Finally, only the weights of these selected 64 experts are loaded. Each expert's weights are projected down to the student model's dimensions using **Randomized SVD**. These projected tensors are then averaged together to create the final, synthetic dense MLP layer. This synthesized layer captures the core knowledge of the teacher's MoE block without the noise from less relevant experts.
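Again only as an illustrative sketch, the two-pass centroid and nearest-expert selection described above might be written along these lines (expert weights are passed in as state dicts; the Randomized SVD projection step is omitted):

```python
import torch

def select_representative_experts(experts: list[dict[str, torch.Tensor]], top_k: int = 64) -> list[int]:
    """Pick the experts whose flattened weights lie closest to the layer centroid."""
    device = "cuda" if torch.cuda.is_available() else "cpu"

    def fingerprint(state_dict: dict[str, torch.Tensor]) -> torch.Tensor:
        return torch.cat([w.flatten().float().to(device) for w in state_dict.values()])

    # Pass 1: running-sum centroid (never materialises all fingerprints at once).
    total = None
    for sd in experts:
        fp = fingerprint(sd)
        total = fp if total is None else total + fp
    centroid = total / len(experts)

    # Pass 2: Euclidean distance of every expert fingerprint to the centroid.
    dists = [torch.linalg.vector_norm(fingerprint(sd) - centroid).item() for sd in experts]
    return sorted(range(len(experts)), key=lambda i: dists[i])[:top_k]
```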
### 3. Delta Calculation and Purification
Once a synthetic teacher tensor (for either an attention or MLP block) is created, it is aligned with the student's corresponding original tensor using **Generalized Procrustes Analysis**. This rotates the teacher's tensor to best match the student's vector space without altering its internal structure.
The difference, or `delta`, is then calculated:
`delta = aligned_synthesized_teacher - original_student`
This delta represents the new knowledge to be imparted. To ensure only the most impactful changes are kept, the **DARE-TIES** algorithm is applied to the delta, which prunes the 80% of values with the lowest magnitude and then rescales the remaining values to preserve the tensor's original norm.
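A simplified sketch of this purification step as described above (magnitude-based pruning followed by norm-preserving rescaling; the exact DARE-TIES implementation used by the author may differ):

```python
import torch

def dare_ties_prune(delta: torch.Tensor, drop_rate: float = 0.80) -> torch.Tensor:
    """Keep only the largest-magnitude (1 - drop_rate) fraction of a delta tensor,
    then rescale the survivors so the tensor's original norm is preserved."""
    flat = delta.flatten()
    k = max(1, int(round((1.0 - drop_rate) * flat.numel())))   # number of values to keep
    threshold = flat.abs().topk(k).values.min()
    pruned = torch.where(flat.abs() >= threshold, flat, torch.zeros_like(flat))
    scale = flat.norm() / (pruned.norm() + 1e-12)              # restore the original norm
    return (pruned * scale).view_as(delta)
```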
### 4. LoRA Matrix Extraction
The final, purified delta tensor holds the essence of the teacher's wisdom. **Singular Value Decomposition (SVD)** is performed on this delta tensor to decompose it into its fundamental components. The most significant components are used to create temporary low-rank `lora_A` and `lora_B` matrices.
### 5. Merging for a Standalone Model
This is the final step. The temporary LoRA matrices (`A` and `B`) are multiplied together (`B @ A`) to reconstruct the purified delta. This delta is then added directly to the weights of the original student model. The resulting tensors are the new, "distilled" weights of the final model, which are saved as a complete, standalone model.
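As a final illustrative sketch (not the author's code), steps 4 and 5 amount to factorising the purified delta into low-rank `lora_A`/`lora_B` matrices via truncated SVD and merging their product back into the student weight:

```python
import torch

def delta_to_lora_and_merge(student_w: torch.Tensor, delta: torch.Tensor, rank: int = 2560):
    """Factorise the purified delta into lora_B @ lora_A via truncated SVD,
    then merge the reconstruction back into the student weight (illustrative only)."""
    u, s, vh = torch.linalg.svd(delta.float(), full_matrices=False)
    r = min(rank, s.numel())
    lora_a = torch.diag(s[:r].sqrt()) @ vh[:r]        # shape (r, in_features)
    lora_b = u[:, :r] @ torch.diag(s[:r].sqrt())      # shape (out_features, r)
    merged = student_w.float() + lora_b @ lora_a      # standalone, distilled weight
    return lora_a, lora_b, merged
```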
## Intended Use & Limitations
Primary: Generate complete, single-file websites (landing pages, marketing pages, simple docs) with semantic HTML and Tailwind classes.
Secondary: Component blocks (hero, pricing, FAQ) for manual composition.
Limitations: It is still a 4B model so you need to be specific with your prompting to get good results. You may have to do a few rerolls to get the best results.
## Distillation Procedure Details
The knowledge transfer was performed using the following configuration for the intermediate delta calculation on the first pass:
- **Teacher Model:** 480B MoE model (Qwen3-Coder-480B)
- **Student Model:** 4B Dense model (`WEBGEN-4B-Preview`)
- **Intermediate LoRA Rank:** `2560`
- **Intermediate LoRA Alpha:** `2560`
- **DARE-TIES Drop Rate:** `0.80`
- **Experts Blended per MLP:** `64` out of `256`
The knowledge transfer was performed using the following configuration for the intermediate delta calculation for the second pass:
- **Teacher Model:** 480B MoE model (Qwen3-Coder-480B)
- **Student Model:** 4B Dense model (`WEBGEN-4B-Preview`)
- **Intermediate LoRA Rank:** `2560`
- **Intermediate LoRA Alpha:** `2560`
- **DARE-TIES Drop Rate:** `0.70`
- **Experts Blended per MLP:** `256` out of `256`
## Citation
```bibtex
@misc{tesslate_webgen_4b_preview_2025,
  title  = {WEBGEN-4B-Preview: Design-first web generation with a 4B model},
  author = {Tesslate Team},
  year   = {2025},
  url    = {https://huggingface.co/Tesslate/WEBGEN-4B-Preview}
}
```
|
ntnu-smil/phi4-all-in-one-w-audio-x-32 | ntnu-smil | 2025-09-06T22:44:12 | 0 | 0 | null | [safetensors, model_hub_mixin, pytorch_model_hub_mixin, region:us] | null | 2025-09-06T22:42:29 |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
omerbkts/blockassist-bc-keen_fast_giraffe_1757198237 | omerbkts | 2025-09-06T22:38:18 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, keen fast giraffe, arxiv:2504.07091, region:us] | null | 2025-09-06T22:37:40 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parlange/pit-gravit-s3 | parlange | 2025-09-06T22:16:37 | 0 | 0 | timm | [timm, pytorch, safetensors, vision-transformer, image-classification, pit, gravitational-lensing, strong-lensing, astronomy, astrophysics, dataset:parlange/gravit-c21, arxiv:2509.00226, license:apache-2.0, model-index, region:us] | image-classification | 2025-09-06T22:15:55 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- pit
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- parlange/gravit-c21
metrics:
- accuracy
- auc
- f1
paper:
- title: "GraViT: A Gravitational Lens Discovery Toolkit with Vision Transformers"
url: "https://arxiv.org/abs/2509.00226"
authors: "Parlange et al."
model-index:
- name: PiT-s3
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.6007
name: Average Accuracy
- type: auc
value: 0.8019
name: Average AUC-ROC
- type: f1
value: 0.4270
name: Average F1-Score
---
# 🌌 pit-gravit-s3
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: PiT
- **🧪 Experiment**: S3 - C21-all-blocks-ResNet18-18660
- **🌌 Dataset**: C21
- **🪐 Fine-tuning Strategy**: all-blocks
- **🎲 Random Seed**: 18660
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/pit-gravit-s3',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** C21 (Cañameras et al. 2021)
**Fine-tuning Strategy:** all-blocks
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | all_blocks |
| Stochastic Depth Probability | 0.1 |
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.0071 | 0.0399 |
| 🎯 Accuracy | 0.9973 | 0.9910 |
| 📊 AUC-ROC | 1.0000 | 0.9995 |
| ⚖️ F1 Score | 0.9973 | 0.9910 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.6007 |
| 📈 Average AUC-ROC | 0.8019 |
| ⚖️ Average F1-Score | 0.4270 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757196278 | cwayneconnor | 2025-09-06T22:06:37 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, mute loud lynx, arxiv:2504.07091, region:us] | null | 2025-09-06T22:06:10 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parlange/cait-gravit-s1 | parlange | 2025-09-06T22:05:15 | 0 | 0 | timm | [timm, pytorch, safetensors, vision-transformer, image-classification, cait, gravitational-lensing, strong-lensing, astronomy, astrophysics, dataset:parlange/gravit-c21, arxiv:2509.00226, license:apache-2.0, model-index, region:us] | image-classification | 2025-09-06T22:05:10 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- cait
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- parlange/gravit-c21
metrics:
- accuracy
- auc
- f1
paper:
- title: "GraViT: A Gravitational Lens Discovery Toolkit with Vision Transformers"
url: "https://arxiv.org/abs/2509.00226"
authors: "Parlange et al."
model-index:
- name: CaiT-s1
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.8044
name: Average Accuracy
- type: auc
value: 0.8214
name: Average AUC-ROC
- type: f1
value: 0.5057
name: Average F1-Score
---
# 🌌 cait-gravit-s1
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: CaiT
- **🧪 Experiment**: S1 - C21-classification-head-18660
- **🌌 Dataset**: C21
- **🪐 Fine-tuning Strategy**: classification-head
- **🎲 Random Seed**: 18660
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/cait-gravit-s1',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** C21 (Cañameras et al. 2021)
**Fine-tuning Strategy:** classification-head
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | classification_head |
| Stochastic Depth Probability | 0.1 |
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.2892 | 0.3902 |
| 🎯 Accuracy | 0.8783 | 0.8440 |
| 📊 AUC-ROC | 0.9502 | 0.9176 |
| ⚖️ F1 Score | 0.8788 | 0.8431 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.8044 |
| 📈 Average AUC-ROC | 0.8214 |
| ⚖️ Average F1-Score | 0.5057 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
bah63843/blockassist-bc-plump_fast_antelope_1757196010 | bah63843 | 2025-09-06T22:01:01 | 0 | 0 | null | [gensyn, blockassist, gensyn-blockassist, minecraft, plump fast antelope, arxiv:2504.07091, region:us] | null | 2025-09-06T22:00:52 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
auto-space/distrostore | auto-space | 2025-09-06T22:00:14 | 0 | 0 | null | [region:us] | null | 2025-01-02T16:01:40 |
---
title: Distrostore
emoji: 🏢
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
parlange/deit-gravit-a1 | parlange | 2025-09-06T21:59:17 | 0 | 0 | timm | [timm, pytorch, safetensors, vision-transformer, image-classification, deit, gravitational-lensing, strong-lensing, astronomy, astrophysics, dataset:parlange/gravit-c21, arxiv:2509.00226, license:apache-2.0, model-index, region:us] | image-classification | 2025-09-06T21:35:32 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- deit
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- parlange/gravit-c21
metrics:
- accuracy
- auc
- f1
paper:
- title: "GraViT: A Gravitational Lens Discovery Toolkit with Vision Transformers"
url: "https://arxiv.org/abs/2509.00226"
authors: "Parlange et al."
model-index:
- name: DeiT-a1
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.8007
name: Average Accuracy
- type: auc
value: 0.8275
name: Average AUC-ROC
- type: f1
value: 0.5055
name: Average F1-Score
---
# 🌌 deit-gravit-a1
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: DeiT
- **🧪 Experiment**: A1 - C21-classification-head
- **🌌 Dataset**: C21
- **🪐 Fine-tuning Strategy**: classification-head
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/deit-gravit-a1',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** C21 (Cañameras et al. 2021)
**Fine-tuning Strategy:** classification-head
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | classification_head |
| Stochastic Depth Probability | 0.1 |
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.1886 | 0.2568 |
| 🎯 Accuracy | 0.9267 | 0.9130 |
| 📊 AUC-ROC | 0.9788 | 0.9631 |
| ⚖️ F1 Score | 0.9268 | 0.9136 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.8007 |
| 📈 Average AUC-ROC | 0.8275 |
| ⚖️ Average F1-Score | 0.5055 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1757193663
|
NahedDom
| 2025-09-06T21:56:19 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping stocky leopard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:56:15 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1757194271
|
GroomerG
| 2025-09-06T21:54:26 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:54:23 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parlange/twins_svt-gravit-b2
|
parlange
| 2025-09-06T21:53:21 | 0 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"twins_svt",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:J24",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2025-09-06T21:53:16 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- twins_svt
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- J24
metrics:
- accuracy
- auc
- f1
model-index:
- name: Twins_SVT-b2
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.7200
name: Average Accuracy
- type: auc
value: 0.6517
name: Average AUC-ROC
- type: f1
value: 0.3307
name: Average F1-Score
---
# 🌌 twins_svt-gravit-b2
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: Twins_SVT
- **🧪 Experiment**: B2 - J24-half
- **🌌 Dataset**: J24
- **🪐 Fine-tuning Strategy**: half
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/twins_svt-gravit-b2',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** J24 (Jaelani et al. 2024)
**Fine-tuning Strategy:** half
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate Schedule | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | half |
| Stochastic Depth Probability | 0.1 |
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.2453 | 0.2262 |
| 🎯 Accuracy | 0.9036 | 0.9148 |
| 📊 AUC-ROC | 0.9625 | 0.9685 |
| ⚖️ F1 Score | 0.9020 | 0.9142 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.7200 |
| 📈 Average AUC-ROC | 0.6517 |
| ⚖️ Average F1-Score | 0.3307 |
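These averages can be reproduced from per-dataset predictions with standard scikit-learn metrics. A minimal sketch, assuming a 0.5 decision threshold and using random placeholder arrays in place of the real labels and lens probabilities:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5):
    """Accuracy, AUC-ROC and F1 for one test set, thresholding probabilities at 0.5."""
    y_pred = (y_prob >= threshold).astype(int)
    return (accuracy_score(y_true, y_pred),
            roc_auc_score(y_true, y_prob),
            f1_score(y_true, y_pred))

# Placeholder stand-ins for the 12 test sets (a-l); replace with real labels/probabilities
rng = np.random.default_rng(0)
results = [(rng.integers(0, 2, 200), rng.random(200)) for _ in range(12)]

per_dataset = np.array([evaluate(y_true, y_prob) for y_true, y_prob in results])
avg_accuracy, avg_auc, avg_f1 = per_dataset.mean(axis=0)
print(avg_accuracy, avg_auc, avg_f1)
```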
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
ultratopaz/114332
|
ultratopaz
| 2025-09-06T21:53:14 | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-06T21:53:18 |
[View on Civ Archive](https://civarchive.com/models/138632?modelVersionId=153205)
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757193763
|
sampingkaca72
| 2025-09-06T21:49:38 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:49:33 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parlange/vit-gravit-b2
|
parlange
| 2025-09-06T21:49:29 | 0 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"vit",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:J24",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2025-09-06T21:48:49 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- vit
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- J24
metrics:
- accuracy
- auc
- f1
model-index:
- name: ViT-b2
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.8112
name: Average Accuracy
- type: auc
value: 0.7916
name: Average AUC-ROC
- type: f1
value: 0.4887
name: Average F1-Score
---
# 🌌 vit-gravit-b2
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: ViT
- **🧪 Experiment**: B2 - J24-half
- **🌌 Dataset**: J24
- **🪐 Fine-tuning Strategy**: half
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/vit-gravit-b2',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** J24 (Jaelani et al. 2024)
**Fine-tuning Strategy:** half
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate Schedule | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | half |
| Stochastic Depth Probability | 0.1 |
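As a rough illustration of the half fine-tuning strategy in the table above, the sketch below freezes the first half of the transformer blocks and trains the remainder plus the head with AdamW and ReduceLROnPlateau. The exact layer split and learning rate used in GraViT are defined in the GitHub repository, so the values here are placeholders:
```python
import timm
import torch

model = timm.create_model('hf-hub:parlange/vit-gravit-b2', pretrained=True)

# Freeze everything, then unfreeze the second half of the transformer blocks and the head
for p in model.parameters():
    p.requires_grad = False
for block in model.blocks[len(model.blocks) // 2:]:
    for p in block.parameters():
        p.requires_grad = True
for p in model.get_classifier().parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # placeholder learning rate
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=10)
```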
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.0291 | 0.0761 |
| 🎯 Accuracy | 0.9900 | 0.9802 |
| 📊 AUC-ROC | 0.9992 | 0.9974 |
| ⚖️ F1 Score | 0.9900 | 0.9801 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.8112 |
| 📈 Average AUC-ROC | 0.7916 |
| ⚖️ Average F1-Score | 0.4887 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
ChenWu98/qwen_2.5_0.5b_sft
|
ChenWu98
| 2025-09-06T21:49:23 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-06T19:17:52 |
---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: qwen_2.5_0.5b_sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen_2.5_0.5b_sft
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/qwen_2.5_0.5b_sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/5b1nvnn7)
This model was trained with SFT.
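A minimal sketch of the kind of TRL SFT run described above; the dataset shown is a placeholder from the TRL examples, since the actual training data is not documented here:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="qwen_2.5_0.5b_sft"),
)
trainer.train()
```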
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
parlange/deit3-gravit-b1
|
parlange
| 2025-09-06T21:48:05 | 0 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"deit3",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:J24",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2025-09-06T21:47:52 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- deit3
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- J24
metrics:
- accuracy
- auc
- f1
model-index:
- name: DeiT3-b1
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.7343
name: Average Accuracy
- type: auc
value: 0.6294
name: Average AUC-ROC
- type: f1
value: 0.3060
name: Average F1-Score
---
# 🌌 deit3-gravit-b1
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: DeiT3
- **🧪 Experiment**: B1 - J24-classification-head
- **🌌 Dataset**: J24
- **🪐 Fine-tuning Strategy**: classification-head
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/deit3-gravit-b1',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** J24 (Jaelani et al. 2024)
**Fine-tuning Strategy:** classification-head
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate Schedule | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | classification_head |
| Stochastic Depth Probability | 0.1 |
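The Epochs, Patience, and ReduceLROnPlateau settings above interact through the validation loss. A minimal sketch of that loop under the classification-head strategy; the learning rate is a placeholder and `train_one_epoch_and_validate` is a hypothetical helper standing in for the actual training code:
```python
import timm
import torch

model = timm.create_model('hf-hub:parlange/deit3-gravit-b1', pretrained=True)

# classification-head strategy: freeze the backbone, train only the classifier
for p in model.parameters():
    p.requires_grad = False
for p in model.get_classifier().parameters():
    p.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)  # placeholder learning rate
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=10)

for epoch in range(100):
    val_loss = train_one_epoch_and_validate(model, optimizer)  # hypothetical helper
    scheduler.step(val_loss)  # lowers the LR after 10 epochs without improvement
```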
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.2381 | 0.2283 |
| 🎯 Accuracy | 0.9057 | 0.9051 |
| 📊 AUC-ROC | 0.9642 | 0.9673 |
| ⚖️ F1 Score | 0.9039 | 0.9044 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.7343 |
| 📈 Average AUC-ROC | 0.6294 |
| ⚖️ Average F1-Score | 0.3060 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
crystalline7/112117
|
crystalline7
| 2025-09-06T21:47:11 | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-06T21:47:15 |
[View on Civ Archive](https://civarchive.com/models/136676?modelVersionId=150774)
|
parlange/deit-gravit-a3
|
parlange
| 2025-09-06T21:41:26 | 0 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"vision-transformer",
"image-classification",
"deit",
"gravitational-lensing",
"strong-lensing",
"astronomy",
"astrophysics",
"dataset:C21",
"arxiv:2509.00226",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2025-09-06T21:40:20 |
---
license: apache-2.0
tags:
- vision-transformer
- image-classification
- pytorch
- timm
- deit
- gravitational-lensing
- strong-lensing
- astronomy
- astrophysics
datasets:
- C21
metrics:
- accuracy
- auc
- f1
model-index:
- name: DeiT-a3
results:
- task:
type: image-classification
name: Strong Gravitational Lens Discovery
dataset:
type: common-test-sample
name: Common Test Sample (More et al. 2024)
metrics:
- type: accuracy
value: 0.8420
name: Average Accuracy
- type: auc
value: 0.8515
name: Average AUC-ROC
- type: f1
value: 0.5552
name: Average F1-Score
---
# 🌌 deit-gravit-a3
🔭 This model is part of **GraViT**: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery
🔗 **GitHub Repository**: [https://github.com/parlange/gravit](https://github.com/parlange/gravit)
## 🛰️ Model Details
- **🤖 Model Type**: DeiT
- **🧪 Experiment**: A3 - C21-all-blocks-ResNet18
- **🌌 Dataset**: C21
- **🪐 Fine-tuning Strategy**: all-blocks
## 💻 Quick Start
```python
import torch
import timm
# Load the model directly from the Hub
model = timm.create_model(
'hf-hub:parlange/deit-gravit-a3',
pretrained=True
)
model.eval()
# Example inference
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
output = model(dummy_input)
predictions = torch.softmax(output, dim=1)
print(f"Lens probability: {predictions[0][1]:.4f}")
```
## ⚡️ Training Configuration
**Training Dataset:** C21 (Cañameras et al. 2021)
**Fine-tuning Strategy:** all-blocks
| 🔧 Parameter | 📝 Value |
|--------------|----------|
| Batch Size | 192 |
| Learning Rate Schedule | AdamW with ReduceLROnPlateau |
| Epochs | 100 |
| Patience | 10 |
| Optimizer | AdamW |
| Scheduler | ReduceLROnPlateau |
| Image Size | 224x224 |
| Fine Tune Mode | all_blocks |
| Stochastic Depth Probability | 0.1 |
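Under the all-blocks strategy every parameter stays trainable, and the stochastic depth value from the table is passed at model creation. A minimal sketch; the learning rate is a placeholder:
```python
import timm
import torch

# all-blocks fine-tuning: no freezing; drop_path_rate enables stochastic depth (0.1 as above)
model = timm.create_model('hf-hub:parlange/deit-gravit-a3', pretrained=True, drop_path_rate=0.1)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # placeholder learning rate
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', patience=10)
```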
## 📈 Training Curves

## 🏁 Final Epoch Training Metrics
| Metric | Training | Validation |
|:---------:|:-----------:|:-------------:|
| 📉 Loss | 0.0071 | 0.0252 |
| 🎯 Accuracy | 0.9975 | 0.9950 |
| 📊 AUC-ROC | 1.0000 | 0.9998 |
| ⚖️ F1 Score | 0.9975 | 0.9950 |
## ☑️ Evaluation Results
### ROC Curves and Confusion Matrices
Performance across all test datasets (a through l) in the Common Test Sample (More et al. 2024):












### 📋 Performance Summary
Average performance across 12 test datasets from the Common Test Sample (More et al. 2024):
| Metric | Value |
|-----------|----------|
| 🎯 Average Accuracy | 0.8420 |
| 📈 Average AUC-ROC | 0.8515 |
| ⚖️ Average F1-Score | 0.5552 |
## 📘 Citation
If you use this model in your research, please cite:
```bibtex
@misc{parlange2025gravit,
title={GraViT: Transfer Learning with Vision Transformers and MLP-Mixer for Strong Gravitational Lens Discovery},
author={René Parlange and Juan C. Cuevas-Tello and Octavio Valenzuela and Omar de J. Cabrera-Rosas and Tomás Verdugo and Anupreeta More and Anton T. Jaelani},
year={2025},
eprint={2509.00226},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.00226},
}
```
---
## Model Card Contact
For questions about this model, please contact the author through: https://github.com/parlange/
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1757193136
|
helmutsukocok
| 2025-09-06T21:36:34 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:36:30 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757194388
|
bah63843
| 2025-09-06T21:33:49 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:33:45 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1757193862
|
omerbektass
| 2025-09-06T21:24:50 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:24:45 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Qwen3-Darkest-Jan-v1-256k-ctx-6B-qx6-hi-mlx
|
nightmedia
| 2025-09-06T21:24:33 | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"programming",
"code generation",
"code",
"coding",
"coder",
"chat",
"brainstorm",
"qwen",
"qwencoder",
"brainstorm20x",
"creative",
"all uses cases",
"Jan-V1",
"horror",
"finetune",
"thinking",
"reasoning",
"text-generation",
"conversational",
"en",
"base_model:DavidAU/Qwen3-Darkest-Jan-v1-256k-ctx-6B",
"base_model:quantized:DavidAU/Qwen3-Darkest-Jan-v1-256k-ctx-6B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-09-06T21:02:12 |
---
license: apache-2.0
base_model: DavidAU/Qwen3-Darkest-Jan-v1-256k-ctx-6B
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm20x
- creative
- all uses cases
- Jan-V1
- horror
- finetune
- thinking
- reasoning
- mlx
library_name: mlx
---
# Qwen3-Darkest-Jan-v1-256k-ctx-6B-qx6-hi-mlx
This model [Qwen3-Darkest-Jan-v1-256k-ctx-6B-qx6-hi-mlx](https://huggingface.co/nightmedia/Qwen3-Darkest-Jan-v1-256k-ctx-6B-qx6-hi-mlx) was
converted to MLX format from [DavidAU/Qwen3-Darkest-Jan-v1-256k-ctx-6B](https://huggingface.co/DavidAU/Qwen3-Darkest-Jan-v1-256k-ctx-6B)
using mlx-lm version **0.27.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Qwen3-Darkest-Jan-v1-256k-ctx-6B-qx6-hi-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
grizzle00/blockassist-bc-curious_mimic_antelope_1757191803
|
grizzle00
| 2025-09-06T21:21:56 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"curious mimic antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T21:21:52 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- curious mimic antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757192337
|
bah63843
| 2025-09-06T21:00:01 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:59:51 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/DansPreConfig-24B-i1-GGUF
|
mradermacher
| 2025-09-06T20:59:41 | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DoppelReflEx/DansPreConfig-24B",
"base_model:quantized:DoppelReflEx/DansPreConfig-24B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-06T13:42:16 |
---
base_model: DoppelReflEx/DansPreConfig-24B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/DoppelReflEx/DansPreConfig-24B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DansPreConfig-24B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/DansPreConfig-24B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
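To fetch a single quant programmatically instead of through the browser, here is a minimal sketch with `huggingface_hub`; the file shown is one of the quants listed below:
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/DansPreConfig-24B-i1-GGUF",
    filename="DansPreConfig-24B.i1-Q4_K_M.gguf",
)
print(path)  # local path that llama.cpp-compatible runtimes can load
```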
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/DansPreConfig-24B-i1-GGUF/resolve/main/DansPreConfig-24B.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SkieyFly/pi0-so101_block_to_container_easy-chunk_size_50-freeze_vision_encoder_false-max_offset_16-uaas
|
SkieyFly
| 2025-09-06T20:56:29 | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-06T20:53:18 |
---
license: apache-2.0
---
|
bah63843/blockassist-bc-plump_fast_antelope_1757192030
|
bah63843
| 2025-09-06T20:54:47 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:54:38 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757191971
|
DiFors
| 2025-09-06T20:53:39 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:53:32 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gradientdegen/task-13-microsoft-betago
|
gradientdegen
| 2025-09-06T20:51:58 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-09-06T20:51:28 |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
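In the absence of documented usage, here is a minimal loading sketch based only on the metadata above (a PEFT adapter for Qwen/Qwen2.5-3B-Instruct); the intended task and prompt format remain undocumented:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-3B-Instruct"
adapter_id = "gradientdegen/task-13-microsoft-betago"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter to the base model
```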
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757191666
|
canoplos112
| 2025-09-06T20:49:43 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:48:22 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Grogun/blockassist-bc-lightfooted_yapping_macaw_1757191626
|
Grogun
| 2025-09-06T20:48:33 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lightfooted yapping macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:48:19 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted yapping macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Economist77/blockassist-bc-wily_leaping_bison_1757189117
|
Economist77
| 2025-09-06T20:47:27 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wily leaping bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:47:23 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wily leaping bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goppetoyu/blockassist-bc-running_tough_antelope_1757191589
|
goppetoyu
| 2025-09-06T20:46:52 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"running tough antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:46:30 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- running tough antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757191557
|
DiFors
| 2025-09-06T20:46:40 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:46:30 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Viktor-01/blockassist-bc-leaping_humming_finch_1757189008
|
Viktor-01
| 2025-09-06T20:43:08 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"leaping humming finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:43:06 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leaping humming finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hamedkharazmi/blockassist-bc-moist_silky_aardvark_1757189959
|
hamedkharazmi
| 2025-09-06T20:42:44 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"moist silky aardvark",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:42:37 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- moist silky aardvark
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seams01/blockassist-bc-insectivorous_stubby_snake_1757189660
|
seams01
| 2025-09-06T20:40:03 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"insectivorous stubby snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:39:59 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- insectivorous stubby snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757190968
|
DiFors
| 2025-09-06T20:36:45 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:36:36 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bradynapier/all_miniLM_L6_v2_with_attentions_onnx
|
bradynapier
| 2025-09-06T20:29:36 | 8 | 1 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"base_model:Qdrant/all_miniLM_L6_v2_with_attentions",
"base_model:quantized:Qdrant/all_miniLM_L6_v2_with_attentions",
"license:apache-2.0",
"region:us"
] |
sentence-similarity
| 2025-09-04T09:46:04 |
---
license: apache-2.0
library_name: transformers.js
language:
- en
pipeline_tag: sentence-similarity
base_model:
- Qdrant/all_miniLM_L6_v2_with_attentions
- sentence-transformers/all-MiniLM-L6-v2
---
ONNX port of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) adjusted to return attention weights.
This model is intended to be used for [BM42 searches](https://qdrant.tech/articles/bm42/).
> Fixes an issue with the [Qdrant version](https://huggingface.co/Qdrant/all_miniLM_L6_v2_with_attentions) not having the `onnx` folder, which prevents transformers.js from loading it.
### Usage
> Note:
> This model is supposed to be used with Qdrant. Vectors have to be configured with [Modifier.IDF](https://qdrant.tech/documentation/concepts/indexing/?q=modifier#idf-modifier).
```typescript
import { AutoTokenizer, AutoModel, TokenizerModel, Tensor } from '@xenova/transformers';
const documents = [
"You should stay, study and sprint.",
"History can only prepare us to be surprised yet again.",
]
const MODEL_ID = "bradynapier/all_miniLM_L6_v2_with_attentions_onnx"
const tokenizer = await AutoTokenizer.from_pretrained(MODEL_ID, {
revision: 'main',
})
// exposes some useful utilities that the Python transformers tokenizer provides
const tokenizerModel = TokenizerModel.fromConfig(tokenizer.model.config)
const model = await AutoModel.from_pretrained(MODEL_ID, {
quantized: false,
revision: 'main',
});
// the published types are wildly incorrect... but this should get you what you need!
```
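On the Qdrant side, the Modifier.IDF requirement from the note above is set when the collection is created. Here is a minimal sketch with the Python client; the URL and collection name are placeholders:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # placeholder URL
client.create_collection(
    collection_name="bm42-demo",  # placeholder name
    vectors_config={},  # no dense vectors in this sketch
    sparse_vectors_config={
        "bm42": models.SparseVectorParams(modifier=models.Modifier.IDF),
    },
)
```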
#### Rough Outline of Getting Attentions
> This may not be the best way, but the documentation is truly lacking and this does the job :-P
```typescript
/**
* Minimal attention tensor shape we rely on.
* Only `dims` and `data` are used (dims = [B=1, H, T, T]).
*/
type XtTensor = { dims: number[]; data: ArrayLike<number | bigint> };
/**
* Collect attentions across layers from a model.forward(...) output.
*
* ⚠️ Transformers.js variation:
* - Some builds return `{ attentions: Tensor[] }`.
* - Others return a dict with `attention_1`, `attention_2`, ... per layer.
*
* @internal
* @param out Raw dictionary from `model.forward(...)`.
* @returns Array of attention tensors (one per layer) with dims `[1, H, T, T]`.
*/
function collectAttentions(out: Record<string, Tensor>): XtTensor[] {
// Prefer array form if present (runtime feature; TS types don’t guarantee it).
const anyOut = out as unknown as { attentions?: XtTensor[] };
if (Array.isArray(anyOut.attentions)) return anyOut.attentions;
// Otherwise gather attention_1..attention_N and sort numerically by suffix.
const keys = Object.keys(out)
.filter((k) => /^attention_\d+$/i.test(k))
.sort(
(a, b) => parseInt(a.split('_')[1], 10) - parseInt(b.split('_')[1], 10),
);
return keys.map((k) => out[k] as unknown as XtTensor);
}
function onesMask(n: number): Tensor {
const data = BigInt64Array.from({ length: n }, () => 1n);
return new Tensor('int64', data, [1, n]);
}
/**
* Tokenization:
* Prefer the public callable form `tokenizer(text, {...})` which returns tensors.
* In case your wrapper only exposes a `_call` (private-ish) we fall back to it here.
* The return includes `input_ids` and `attention_mask` tensors.
*/
const text = documents[0]; // any input string; reuses the sample documents above
const enc =
  typeof tokenizer === 'function' ?
    // eslint-disable-next-line @typescript-eslint/await-thenable
    await (tokenizer as unknown as typeof tokenizer._call)(text, {
      add_special_tokens: true,
    })
    : tokenizer._call(text, { add_special_tokens: true }); // <-- documented hack
// Convert tensor buffers (may be BigInt) → number[] for downstream processing.
const input_ids = Array.from(
(enc.input_ids as Tensor).data as ArrayLike<number | bigint>,
).map(Number);
/**
* Forward pass with attentions.
*
* Another "crazy" bit: different Transformers.js builds expose attentions differently. We:
* - accept `{ attentions: Tensor[] }`, or
* - collect `attention_1, attention_2, ...` and sort them.
* Also, `Tensor` has no `.get(...)` so we do **flat buffer indexing** with `dims`.
*/
const out = (await model.forward({
  input_ids: enc.input_ids, // pass the original tensor, not the number[] copy
attention_mask: onesMask(input_ids.length),
output_attentions: true,
})) as unknown as Record<string, Tensor>;
const attentions = collectAttentions(out)
```
|
Lrodriolivera/s1-glpi-agent-8B
|
Lrodriolivera
| 2025-09-06T20:23:59 | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-15T21:45:34 |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** RYP CLOUD
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
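In the absence of documented usage, here is a minimal generation sketch based only on the metadata above (a Qwen3 text-generation model fine-tuned with TRL SFT); the prompt and intended agent workflow are placeholders:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Lrodriolivera/s1-glpi-agent-8B",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello"}]  # placeholder prompt
out = generator(messages, max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```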
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DiFors/blockassist-bc-singing_sizable_snake_1757190152
|
DiFors
| 2025-09-06T20:23:17 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:23:10 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757190150
|
DiFors
| 2025-09-06T20:23:13 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:23:09 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757190021
|
canoplos112
| 2025-09-06T20:22:22 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:20:56 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757190026
|
DiFors
| 2025-09-06T20:21:00 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:20:54 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poeryouy/blockassist-bc-melodic_shiny_coral_1757190016
|
poeryouy
| 2025-09-06T20:20:41 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"melodic shiny coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:20:17 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- melodic shiny coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757189919
|
bah63843
| 2025-09-06T20:19:30 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:19:22 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
acidjp/blockassist-bc-pesty_extinct_prawn_1757187606
|
acidjp
| 2025-09-06T20:19:25 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:19:21 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
giovannidemuri/llama8b-er-v595-seed2-hx_lora
|
giovannidemuri
| 2025-09-06T20:18:53 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-06T17:02:37 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
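The card leaves this section empty, so the following is only a minimal sketch: it assumes the repository loads as a standard 🤗 Transformers causal language model, as suggested by the `llama` and `text-generation` tags above; the repo id is the only detail taken from this listing.
```python
# Minimal sketch, assuming a standard causal-LM checkpoint (not confirmed by the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giovannidemuri/llama8b-er-v595-seed2-hx_lora"  # repo id from this listing

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```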
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chainway9/blockassist-bc-untamed_quick_eel_1757188298
|
chainway9
| 2025-09-06T20:17:06 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:17:02 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757189657
|
canoplos112
| 2025-09-06T20:16:16 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:14:53 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Miracle-man/blockassist-bc-singing_lithe_koala_1757187827
|
Miracle-man
| 2025-09-06T20:15:04 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing lithe koala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:15:01 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing lithe koala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ebyte/qwimg
|
Ebyte
| 2025-09-06T20:14:42 | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-06T17:54:02 |
---
license: apache-2.0
---
|
DiFors/blockassist-bc-singing_sizable_snake_1757189551
|
DiFors
| 2025-09-06T20:13:06 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:13:00 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757189420
|
DiFors
| 2025-09-06T20:11:01 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:10:55 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DiFors/blockassist-bc-singing_sizable_snake_1757189368
|
DiFors
| 2025-09-06T20:10:07 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:09:58 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757189130
|
bah63843
| 2025-09-06T20:06:20 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:06:15 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757188962
|
bah63843
| 2025-09-06T20:03:46 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T20:03:34 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Biswajit7890/ICD_unsloth_LLama
|
Biswajit7890
| 2025-09-06T20:00:05 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/Meta-Llama-3.1-70B-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"base_model:unsloth/Meta-Llama-3.1-70B-bnb-4bit",
"region:us"
] |
text-generation
| 2025-09-06T17:24:01 |
---
base_model: unsloth/Meta-Llama-3.1-70B-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/Meta-Llama-3.1-70B-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
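The card itself provides no snippet; since the metadata above marks this repository as a LoRA adapter for `unsloth/Meta-Llama-3.1-70B-bnb-4bit`, one plausible loading path (a sketch only, not confirmed by the author) is PEFT's auto class. Note that the 70B 4-bit base model requires `bitsandbytes` and substantial GPU memory.
```python
# Sketch: load the LoRA adapter together with its 4-bit base model via PEFT.
# Assumes peft, transformers, bitsandbytes and enough GPU memory for a 70B model.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Biswajit7890/ICD_unsloth_LLama"  # adapter repo from this listing

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-70B-bnb-4bit")

prompt = "Patient presents with acute bronchitis. ICD-10 code:"  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```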
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
DiFors/blockassist-bc-singing_sizable_snake_1757188680
|
DiFors
| 2025-09-06T19:58:46 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing sizable snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:58:33 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing sizable snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757188641
|
fakir22
| 2025-09-06T19:58:01 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:57:57 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gopalakrishna12/blockassist-bc-ravenous_horned_cassowary_1757188483
|
Gopalakrishna12
| 2025-09-06T19:55:34 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous horned cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:55:29 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous horned cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757188044
|
fakir22
| 2025-09-06T19:48:04 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flapping peaceful caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:48:01 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping peaceful caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1757187646
|
bah63843
| 2025-09-06T19:41:50 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:41:21 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shahed-05/trianed_model_with_custom_data
|
shahed-05
| 2025-09-06T19:41:42 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-06T19:41:38 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
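The card does not state the architecture or task, so only a generic sketch is possible: the snippet below loads the checkpoint with the neutral `Auto` classes and inspects its config; the repo id is taken from this listing and everything else is an assumption.
```python
# Generic sketch: the task is unspecified, so inspect the config before choosing a head.
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_id = "shahed-05/trianed_model_with_custom_data"  # repo id from this listing

config = AutoConfig.from_pretrained(model_id)
print(config.architectures)  # reveals which model class the checkpoint was saved with

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # swap for a task-specific Auto class as needed
```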
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hugs288/sdxl-stuff
|
Hugs288
| 2025-09-06T19:41:15 | 0 | 4 | null |
[
"license:gpl-2.0",
"region:us"
] | null | 2025-03-21T23:06:17 |
---
license: gpl-2.0
---
<style>
.custom-table {
table-layout: fixed;
width: 60%;
border-collapse: collapse;
}
.custom-table td {
width: 100%;
vertical-align: top;
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
}
</style>
stolen from Linaqruf readme for Animagine XL
<table class="custom-table">
<tr>
<td>
<a href="https://huggingface.co/Hugs288/sdxl-stuff/blob/main/Nyte-Tyde-Hugs-NoobV.safetensors">
<img class="custom-image" src="https://huggingface.co/Hugs288/sdxl-stuff/resolve/main/Nyte-Tyde-Hugs-NoobV.png" alt="a very nice migu">
</a>
</td>
</tr>
</table>
|
eekay/gemma-2b-it-owl-numbers-ft
|
eekay
| 2025-09-06T19:41:07 | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-28T22:07:10 |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
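A minimal sketch, assuming the checkpoint behaves like a standard Gemma instruct model (per the `gemma` and `text-generation` tags above); only the repo id is taken from this listing.
```python
# Minimal sketch using the high-level pipeline API (assumes a standard Gemma checkpoint).
from transformers import pipeline

generator = pipeline("text-generation", model="eekay/gemma-2b-it-owl-numbers-ft")

result = generator("Here are three of my favourite numbers:", max_new_tokens=32)
print(result[0]["generated_text"])
```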
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cwayneconnor/blockassist-bc-mute_loud_lynx_1757187460
|
cwayneconnor
| 2025-09-06T19:39:33 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute loud lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:38:57 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute loud lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1757187411
|
omerbkts
| 2025-09-06T19:38:04 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-06T19:37:11 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
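For instance, the split can be pulled straight into 🤗 Datasets; the repository id below is an assumption, so substitute the id shown on this dataset's Hub page if it differs.
```python
# Sketch: load the model-cards dataset with 🤗 Datasets.
# The dataset id is assumed; replace it with the id shown on this page if different.
from datasets import load_dataset

cards = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
print(cards.column_names)  # inspect the available columns
print(cards[0])            # first record, including the raw card text
```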
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards, and this option may be preferable if you have a very specific use case or require a different format.
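As a sketch of that alternative route, an individual card can be fetched directly with the `huggingface_hub` client library:
```python
# Sketch: fetch a single model card straight from the Hub instead of using this dataset.
from huggingface_hub import ModelCard

card = ModelCard.load("gpt2")   # any public model id works here
print(card.data.to_dict())      # the YAML metadata block of the card
print(card.text[:500])          # the Markdown body of the README.md
```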
### Source Data
The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community and we do not have any control over the content of the model cards. We do not review the content of the model cards and we do not make any claims about the accuracy of the information in the model cards. Some model cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the model. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact