modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-13 18:26:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 558 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-13 18:25:20) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
manancode/opus-mt-fr-id-ctranslate2-android
|
manancode
| 2025-08-20T12:12:35Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:12:24Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-id-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-id` converted to CTranslate2 format for efficient inference.
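For context, a conversion along these lines can be done with CTranslate2's Transformers converter; this is a sketch only (the card does not document the actual pipeline, and the output directory name is an assumption):
```python
from ctranslate2.converters import TransformersConverter

# Convert the original OPUS-MT model to CTranslate2, quantizing weights to INT8
converter = TransformersConverter("Helsinki-NLP/opus-mt-fr-id")
converter.convert("opus-mt-fr-id-ct2", quantization="int8")
```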
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-id
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Erdeniz/fine_tuned_financial_analysis_intent_classification_turkcell_aiops_grup3
|
Erdeniz
| 2025-08-20T12:12:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T12:12:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-fr-hu-ctranslate2-android
|
manancode
| 2025-08-20T12:12:21Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:12:13Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-hu-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-hu` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-hu
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF
|
tensorblock
| 2025-08-20T12:12:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"TensorBlock",
"GGUF",
"base_model:sophiargh/MNLP_M2_mcqa_model",
"base_model:quantized:sophiargh/MNLP_M2_mcqa_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T12:04:42Z |
---
library_name: transformers
license: apache-2.0
base_model: sophiargh/MNLP_M2_mcqa_model
tags:
- generated_from_trainer
- TensorBlock
- GGUF
model-index:
- name: MNLP_M2_mcqa_model2
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## sophiargh/MNLP_M2_mcqa_model - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [sophiargh/MNLP_M2_mcqa_model](https://huggingface.co/sophiargh/MNLP_M2_mcqa_model).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
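As a hedged sketch of running these files locally (the card documents llama.cpp compatibility; using `llama-cpp-python` and this particular file name are assumptions on top of that):
```python
from llama_cpp import Llama

# Load a downloaded GGUF file (filename assumed from the table below)
llm = Llama(model_path="MNLP_M2_mcqa_model-Q4_K_M.gguf")

# create_chat_completion formats messages with the ChatML template shown above
out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You answer multiple-choice questions."},
    {"role": "user", "content": "Which planet is known as the Red Planet?"},
])
print(out["choices"][0]["message"]["content"])
```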
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MNLP_M2_mcqa_model-Q2_K.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q2_K.gguf) | Q2_K | 0.296 GB | smallest, significant quality loss - not recommended for most purposes |
| [MNLP_M2_mcqa_model-Q3_K_S.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q3_K_S.gguf) | Q3_K_S | 0.323 GB | very small, high quality loss |
| [MNLP_M2_mcqa_model-Q3_K_M.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q3_K_M.gguf) | Q3_K_M | 0.347 GB | very small, high quality loss |
| [MNLP_M2_mcqa_model-Q3_K_L.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q3_K_L.gguf) | Q3_K_L | 0.368 GB | small, substantial quality loss |
| [MNLP_M2_mcqa_model-Q4_0.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q4_0.gguf) | Q4_0 | 0.382 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MNLP_M2_mcqa_model-Q4_K_S.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q4_K_S.gguf) | Q4_K_S | 0.383 GB | small, greater quality loss |
| [MNLP_M2_mcqa_model-Q4_K_M.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q4_K_M.gguf) | Q4_K_M | 0.397 GB | medium, balanced quality - recommended |
| [MNLP_M2_mcqa_model-Q5_0.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q5_0.gguf) | Q5_0 | 0.437 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MNLP_M2_mcqa_model-Q5_K_S.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q5_K_S.gguf) | Q5_K_S | 0.437 GB | large, low quality loss - recommended |
| [MNLP_M2_mcqa_model-Q5_K_M.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q5_K_M.gguf) | Q5_K_M | 0.444 GB | large, very low quality loss - recommended |
| [MNLP_M2_mcqa_model-Q6_K.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q6_K.gguf) | Q6_K | 0.495 GB | very large, extremely low quality loss |
| [MNLP_M2_mcqa_model-Q8_0.gguf](https://huggingface.co/tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF/blob/main/MNLP_M2_mcqa_model-Q8_0.gguf) | Q8_0 | 0.639 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF --include "MNLP_M2_mcqa_model-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/sophiargh_MNLP_M2_mcqa_model-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755690455
|
lisaozill03
| 2025-08-20T12:12:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:11:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fr-ho-ctranslate2-android
|
manancode
| 2025-08-20T12:11:46Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:11:36Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-ho-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ho` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ho
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-hil-ctranslate2-android
|
manancode
| 2025-08-20T12:11:34Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:11:25Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-hil-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-hil` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-hil
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-he-ctranslate2-android
|
manancode
| 2025-08-20T12:11:22Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:11:13Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-he-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-he` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-he
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Grigorij/fanuc_shooting_sim_unity
|
Grigorij
| 2025-08-20T12:11:12Z | 4 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Grigorij/Shooting_unit_2",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T11:20:47Z |
---
base_model: lerobot/smolvla_base
datasets: Grigorij/Shooting_unit_2
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Youseff1987/Qwen-Image-bnb-8bit-4bit
|
Youseff1987
| 2025-08-20T12:11:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"diffusers:QwenImagePipeline",
"region:us"
] | null | 2025-08-20T11:58:05Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-fr-guw-ctranslate2-android
|
manancode
| 2025-08-20T12:10:56Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:10:42Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-guw-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-guw` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-guw
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
howey/HDT-E
|
howey
| 2025-08-20T12:10:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hed",
"en",
"dataset:howey/unarXive",
"dataset:howey/wiki_en",
"dataset:howey/hupd",
"arxiv:2407.08330",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-03T21:02:20Z |
---
library_name: transformers
license: apache-2.0
language:
- en
datasets:
- howey/unarXive
- howey/wiki_en
- howey/hupd
---
# Model Weights Coming Soon!
## Using HDT
To use the pre-trained model for masked language modeling, use the following snippet:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = 'howey/HDT-E'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```
For more details, please see our github repository: [HDT](https://github.com/autonomousvision/hdt)
## Model Details
The model has a context length of `8192` and is similar in size to BERT, with approximately `110M` parameters.
It was trained on the standard masked language modeling task with a Transformer-based architecture using our proposed hierarchical attention.
Training took 24 hours on the ArXiv+Wikipedia+HUPD corpus, processing a total of `1.3 billion` tokens.
For more details, please see our paper: [HDT: Hierarchical Document Transformer](https://arxiv.org/pdf/2407.08330).
## Citation
<!-- If there is a paper or blog post introducing the model, the Bibtex information for that should go in this section. -->
Please cite our work using the bibtex below:
**BibTeX:**
```
@inproceedings{He2024COLM,
title={HDT: Hierarchical Document Transformer},
author={Haoyu He and Markus Flicke and Jan Buchmann and Iryna Gurevych and Andreas Geiger},
year={2024},
booktitle={Conference on Language Modeling}
}
```
## Model Card Contact
Haoyu (haoyu.he@uni-tuebingen.de)
|
manancode/opus-mt-fr-gil-ctranslate2-android
|
manancode
| 2025-08-20T12:10:39Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:10:29Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-gil-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-gil` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-gil
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI-ctranslate2-android
|
manancode
| 2025-08-20T12:10:26Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:10:17Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fi_nb_no_nn_ru_sv_en-SAMI-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755691774
|
2hpsatt
| 2025-08-20T12:10:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:10:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
honjox/blockassist-bc-nasty_graceful_cougar_1755691336
|
honjox
| 2025-08-20T12:09:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"nasty graceful cougar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:08:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nasty graceful cougar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
poeerc/tteras
|
poeerc
| 2025-08-20T12:08:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-20T12:06:44Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/mpc-hc64_deH5iYFAbd.png
text: tteras
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: tteras
---
# tteras
<Gallery />
## Trigger words
You should use `tteras` to trigger the image generation.
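As a minimal sketch (the `diffusers` LoRA-loading calls are standard, but their use with this repo is an assumption, not part of the original card):
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base model this LoRA adapts
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the LoRA weights from this repository
pipe.load_lora_weights("poeerc/tteras")

# Include the trigger word `tteras` in the prompt
image = pipe("tteras, portrait photo").images[0]
image.save("tteras.png")
```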
## Download model
[Download](/poeerc/tteras/tree/main) them in the Files & versions tab.
|
EmilRyd/gpt-oss-ground-truth-30
|
EmilRyd
| 2025-08-20T12:07:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T12:02:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lavinzco/blockassist-bc-thick_climbing_giraffe_1755686279
|
lavinzco
| 2025-08-20T12:07:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick climbing giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:33:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick climbing giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/llama-3-2-1b-detox_v1f_round1
|
MattBou00
| 2025-08-20T12:07:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T12:05:52Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/final-model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/final-model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/final-model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
asteria-life/openalex_articles_v1
|
asteria-life
| 2025-08-20T12:07:00Z | 0 | 0 |
model2vec
|
[
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"sentence-transformers",
"license:mit",
"region:us"
] | null | 2025-08-20T12:06:52Z |
---
library_name: model2vec
license: mit
model_name: tmpbynzjmjv
tags:
- embeddings
- static-embeddings
- sentence-transformers
---
# tmpbynzjmjv Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of a Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. Model2Vec models are the smallest, fastest, and most performant static embedders available. The distilled models are up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
### Using Model2Vec
The [Model2Vec library](https://github.com/MinishLab/model2vec) is the fastest and most lightweight way to run Model2Vec models.
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("tmpbynzjmjv")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Using Sentence Transformers
You can also use the [Sentence Transformers library](https://github.com/UKPLab/sentence-transformers) to load and use the model:
```python
from sentence_transformers import SentenceTransformer
# Load a pretrained Sentence Transformer model
model = SentenceTransformer("tmpbynzjmjv")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
### Distilling a Model2Vec model
You can distill a Model2Vec model from a Sentence Transformer model using the `distill` method. First, install the `distill` extra with `pip install model2vec[distill]`. Then, run the following code:
```python
from model2vec.distill import distill
# Distill a Sentence Transformer model, in this case the BAAI/bge-base-en-v1.5 model
m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
## How it works
Model2vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). During inference, we simply take the mean of all token embeddings occurring in a sentence.
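To make the inference step concrete, here is a toy sketch (made-up two-token vocabulary and vectors, not the Model2Vec implementation): the sentence embedding is just the mean of the pre-computed token vectors.
```python
import numpy as np

# Toy static embedding table: one pre-computed vector per token
vocab = {"example": 0, "sentence": 1}
embeddings = np.array([[0.1, 0.3],
                       [0.5, -0.2]])  # shape (vocab_size, dim)

def encode(tokens):
    # Look up each known token's static vector and average them
    ids = [vocab[t] for t in tokens if t in vocab]
    return embeddings[ids].mean(axis=0)

print(encode(["example", "sentence"]))  # -> [0.3, 0.05]
```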
## Additional Resources
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Base Models](https://huggingface.co/collections/minishlab/model2vec-base-models-66fd9dd9b7c3b3c0f25ca90e)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/tree/main/results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
- [Website](https://minishlab.github.io/)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@article{minishlab2024model2vec,
author = {Tulkens, Stephan and {van Dongen}, Thomas},
title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
year = {2024},
url = {https://github.com/MinishLab/model2vec}
}
```
|
lc700x/Video-Depth-Anything-Small-hf
|
lc700x
| 2025-08-20T12:06:55Z | 0 | 0 |
pytorch
|
[
"pytorch",
"depth-estimation",
"arxiv:2501.12375",
"license:apache-2.0",
"region:us"
] |
depth-estimation
| 2025-08-20T12:05:00Z |
---
license: apache-2.0
library_name: pytorch
pipeline_tag: depth-estimation
---
# Video Depth Anything
This repository contains the model described in [Video Depth Anything: Consistent Depth Estimation for Super-Long Videos](https://huggingface.co/papers/2501.12375).
Project Page: https://videodepthanything.github.io
## About
This model is based on [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2), and can be applied to arbitrarily long videos without compromising quality, consistency, or generalization ability. Compared with other diffusion-based models, it enjoys faster inference speed, fewer parameters, and higher consistent depth accuracy.
## Usage
```bash
git clone https://github.com/DepthAnything/Video-Depth-Anything
cd Video-Depth-Anything
pip install -r requirements.txt
```
Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.
```bash
bash get_weights.sh
```
### Run inference on a video
```bash
python3 run.py --input_video ./assets/example_videos/davis_rollercoaster.mp4 --output_dir ./outputs --encoder vitl
```
|
Qybera/LisaV3
|
Qybera
| 2025-08-20T12:04:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lisa",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T13:49:43Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
new_version: Qybera/LisaV3
library_name: transformers
---
# LISA-v3.5: Learning Intelligence with Sensory Awareness
## Developed in Kenya, Africa by the LISA Team
**LISA (Learning Intelligence with Sensory Awareness)** is a cutting-edge multimodal AI system developed in Kenya, Africa, by the dedicated LISA Team. This model represents African innovation in artificial intelligence, built entirely from scratch without relying on pretrained models.
## Core Mission
Build a scalable, perception-focused AI that can:
- **See** and understand visual environments
- **Listen** and process audio/speech
- **Understand** context and situations
- **Interact** intelligently with the environment
- **Learn** continuously from experiences
## Key Features
- **Lisa Architecture**: Built from scratch using ViT-B/16 inspired architectures
- **Computer Vision**: Real-time object detection, depth estimation, and scene understanding
- **Audio Processing**: Speech recognition, sound classification, and emotion detection
- **Multimodal Fusion**: Seamless integration of vision and audio processing
- **Real-time Processing**: Optimized for live streaming and interactive applications
- **African Innovation**: Proudly developed in Kenya, East Africa
## Quick Start
### Basic Usage
```python
from lisa import LISAModel
import torch
# Load the model - same initialization process
model = LISAModel.from_pretrained("./")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# Process vision + audio input
result = model.process_multimodal(
image_path="image.jpg", # Visual input - what the model "sees"
audio_path="audio.wav" # Auditory input - what the model "hears"
)
print(result.response)
```
### Streaming Processing
```python
import cv2
import sounddevice as sd
import numpy as np
import threading
from queue import Queue
# Initialize LISA for multimodal streaming
lisa = LISAModel.from_pretrained("./")
lisa.start_streaming()
# Create synchronized queues for audio and video data
audio_queue = Queue(maxsize=10) # Buffer for audio chunks
frame_queue = Queue(maxsize=5) # Buffer for video frames
def audio_callback(indata, frames, time, status):
"""Continuously capture audio and store in queue"""
if not audio_queue.full():
audio_queue.put(indata.copy()) # Store audio chunk for processing
# Start audio stream (runs in background thread)
audio_stream = sd.InputStream(
callback=audio_callback,
channels=1, # Mono audio for simplicity
samplerate=16000, # Standard rate for speech processing
blocksize=1024 # Audio chunk size
)
# Process synchronized video and audio streams
cap = cv2.VideoCapture(0)
audio_stream.start()
while True:
ret, frame = cap.read()
if ret and not audio_queue.empty():
# Get the most recent audio chunk
audio_chunk = audio_queue.get()
# Process both video frame AND audio together
result = lisa.process_multimodal_frame(
frame=frame, # What the AI "sees" right now
audio=audio_chunk # What the AI "hears" right now
)
print(f"Vision: {result.visual_detections}")
print(f"Audio: {result.audio_events}")
print(f"Combined: {result.multimodal_inference}")
# Display with annotations from both modalities
annotated_frame = lisa.annotate_multimodal_frame(frame, result)
cv2.imshow('LISA Vision+Audio', annotated_frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Clean up resources
audio_stream.stop()
cap.release()
cv2.destroyAllWindows()
```
### Vision+Audio Processing
```python
import cv2
import numpy as np
from threading import Thread
import time
# Enhanced callback that processes both audio and synchronized video
def multimodal_callback(audio_chunk, current_frame=None):
"""
This callback now processes both audio and visual information together.
Think of this like how humans naturally combine what they hear with what they see
to understand a conversation or situation more completely.
"""
# Process both modalities together - this is the key difference
result = lisa.process_multimodal_realtime(
audio=audio_chunk, # What the AI hears (speech, sounds, emotions)
frame=current_frame # What the AI sees (faces, gestures, environment)
)
# Now we get richer, cross-modal insights
if result.transcript:
print(f"Speech: {result.transcript}")
# Emotion detection now uses BOTH audio tone AND facial expressions
if result.emotion_scores:
print(f"Voice Emotion: {result.audio_emotion}") # From speech patterns
print(f"Visual Emotion: {result.facial_emotion}") # From facial expressions
print(f"Combined Emotion: {result.fused_emotion}") # Best of both worlds
# New capabilities emerge from combining modalities
if result.speaker_identification:
print(f"Speaker: {result.identified_speaker}") # Match voice to face
if result.attention_focus:
print(f"Looking at: {result.visual_attention}") # Where are they looking while speaking?
# Capture video frames continuously to sync with audio
current_frame = None
cap = cv2.VideoCapture(0)
def capture_frames():
"""
Continuously capture video frames in a separate thread.
This ensures we always have a recent frame available when audio arrives.
Think of this as maintaining a 'visual memory' that stays current.
"""
global current_frame
while True:
ret, frame = cap.read()
if ret:
current_frame = frame # Update the most recent visual context
time.sleep(0.03) # Roughly 30 FPS capture rate
# Start the video capture thread
video_thread = Thread(target=capture_frames, daemon=True)
video_thread.start()
# Modified callback function that includes current visual context
def enhanced_audio_callback(audio_chunk):
"""
This wrapper ensures each audio chunk is processed alongside
the most recent visual frame, creating temporal alignment.
"""
multimodal_callback(audio_chunk, current_frame)
# Start the integrated audio+vision stream
lisa.start_audio_stream(callback=enhanced_audio_callback)
```
- **Temporal Synchronization:** The biggest challenge in multimodal AI is ensuring that what you hear and what you see correspond to the same moment in time. Notice how we maintain a current_frame variable that's continuously updated in a separate thread. This creates a "visual memory" that's always fresh when new audio arrives. Think of it like how your brain automatically coordinates the timing of what your eyes see with what your ears hear.
- **Cross-Modal Enhancement:** The real magic happens in process_multimodal_realtime(). Instead of analyzing speech and visual cues separately, the model can now cross-reference them. For example, if someone says "I'm fine" but their facial expression shows distress, the combined emotion analysis will be more accurate than either modality alone. This mimics human intuition about reading people's true feelings.
- **Emergent Capabilities:** When you combine vision and audio, new possibilities emerge that weren't available with either modality alone. Speaker identification becomes much more robust when you can match a voice to a face. Understanding where someone is looking while they speak adds crucial context about their intent and focus.
- **Threaded Architecture:** Notice how we use a separate thread for video capture. This architectural choice is crucial because audio processing is time-sensitive - you cannot afford to miss audio chunks while waiting for a video frame to process. The threaded approach ensures smooth, real-time operation of both streams.
## Architecture
### Vision Component
- **Lisa ViT-B/16 inspired architecture**
- Patch size: 16x16
- Embedding dimensions: 384 (mini) / 768 (full)
- Multi-head attention layers: 6-12
- Lisa object detection head
- Depth estimation module
### Audio Component
- **Lisa Audio Transformer**
- Sample rate: 16kHz
- Mel-scale features: 80 channels
- CTC-based speech recognition
- Environmental sound classification (50+ classes)
- Emotion detection (7 emotions)
### Multimodal Fusion
- Cross-attention mechanisms
- Temporal synchronization
- Context-aware processing
- Real-time inference capabilities
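As a quick sanity check on these specifications, the sketch below works out the tensor shapes they imply. The 224x224 input resolution and the 10 ms mel hop are assumptions (standard defaults for ViT-B/16-style encoders and speech frontends), not values stated in this card.

```python
# Back-of-the-envelope shapes implied by the specs above.
# 224x224 input and a 10 ms hop are assumed defaults, not documented values.
image_size, patch_size = 224, 16
num_patches = (image_size // patch_size) ** 2
print(num_patches)  # 196 patch tokens per image for a ViT-B/16-style encoder

sample_rate, hop_seconds, n_mels = 16_000, 0.010, 80
frames_per_second = round(1 / hop_seconds)
print(frames_per_second, n_mels)  # one second of audio -> a (100, 80) mel feature map
```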
## Model Specifications
- **Total Parameters**: ~6M (mini) / ~25M (full)
- **Input Modalities**: Images, Audio, Video
- **Output Capabilities**: Object detection, Audio analysis
- **Processing Speed**: Real-time capable
- **Memory Requirements**: 2GB+ RAM recommended
- **Platform Support**: Windows, Linux, macOS
## About the LISA Team
The LISA Team is based in Kenya, East Africa, and is dedicated to advancing artificial intelligence research and development within the African continent. Our mission is to create AI systems that understand and serve diverse communities while maintaining cultural sensitivity and awareness.
**Development Location**: Kenya, East Africa
**Team**: LISA Development Team
**Philosophy**: Building AI from the ground up without dependency on external pretrained models
**Vision**: Democratizing AI development in Africa and beyond
## Self-Awareness Features
LISA is designed with self-awareness capabilities and knows:
- Its development origin: Kenya, Africa
- Its creators: The LISA Team
- Its cultural context: African AI innovation
- Its architectural uniqueness: Built from scratch
- Its mission: Advancing African AI capabilities
## Performance Metrics
- **Object Detection**: mAP@0.5: ~65% (Lisa dataset)
- **Speech Recognition**: WER: ~15% (English)
- **Sound Classification**: Accuracy: ~78% (environmental sounds)
- **Emotion Detection**: F1-Score: ~72% (7 emotions)
- **Processing Speed**: ~30 FPS (vision), ~Real-time (audio)
## Deployment
### Local Deployment
```bash
python deploy.py --host 0.0.0.0 --port 8000
```
### Docker Deployment
```bash
docker build -t lisa-v3.5 .
docker run -p 8000:8000 lisa-v3.5
```
### API Usage
```bash
curl -X POST "http://localhost:8000/process" \
-H "Content-Type: application/json" \
-d '{"audio": "audio.wav", "image_url": "image.jpg"}'
```
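The same endpoint can also be called from Python. This is a sketch that assumes the request schema shown in the curl example above; the field names are taken from that example and are not otherwise verified.

```python
import requests

# Sketch of a client call, assuming the JSON schema from the curl example above.
payload = {"audio": "audio.wav", "image_url": "image.jpg"}
response = requests.post("http://localhost:8000/process", json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```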
## License
This model is released under the Apache 2.0 License. See LICENSE file for details.
## Contributing
We welcome contributions from the global AI community. Please see CONTRIBUTING.md for guidelines.
## Contact
- **Team**: LISA Development Team
- **Location**: Kenya, East Africa
- **Email**: [elijahnzeli894@gmail.com](mailto:elijahnzeli894@gmail.com)
- **Website**: None
## Acknowledgments
Special thanks to the Kenyan AI community and African researchers who contributed to making LISA possible. This project represents the growing AI capabilities within Africa and our commitment to technological innovation.
---
**Proudly developed in Kenya, Africa 🇰🇪**
*"LISA represents African innovation in artificial intelligence - built from the ground up with pride, passion, and purpose."*
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755689749
|
katanyasekolah
| 2025-08-20T12:04:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:04:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755691341
|
kapalbalap
| 2025-08-20T12:03:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:02:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Milica-y-Angel-David-video/ver-link-milica-y-angel-david-erome-debut-angel-david-milica-video
|
Milica-y-Angel-David-video
| 2025-08-20T12:03:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T12:01:06Z |
[Watch the video](https://cutt.ly/GrH1tFQs)
|
team-suzuki/deepseekr1_sft004origin2_20250820120222
|
team-suzuki
| 2025-08-20T12:02:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:team-suzuki/DeepSeek-R1-0528-Qwen3-8B_Merged_SFT_SFT_003_origin_2_v001_20250819",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:team-suzuki/DeepSeek-R1-0528-Qwen3-8B_Merged_SFT_SFT_003_origin_2_v001_20250819",
"region:us"
] |
text-generation
| 2025-08-20T12:02:23Z |
---
base_model: team-suzuki/DeepSeek-R1-0528-Qwen3-8B_Merged_SFT_SFT_003_origin_2_v001_20250819
library_name: peft
model_name: fine_tuned_deepseek_sft
tags:
- base_model:adapter:team-suzuki/DeepSeek-R1-0528-Qwen3-8B_Merged_SFT_SFT_003_origin_2_v001_20250819
- lora
- sft
- transformers
- trl
licence: license
pipeline_tag: text-generation
---
# Model Card for fine_tuned_deepseek_sft
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="team-suzuki/deepseekr1_sft004origin2_20250820120222", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- PEFT 0.17.0
- TRL: 0.21.0
- Transformers: 4.56.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755689734
|
indoempatnol
| 2025-08-20T12:01:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:01:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daouncl/blockassist-bc-dextrous_aquatic_rabbit_1755691211
|
daouncl
| 2025-08-20T12:01:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dextrous aquatic rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:01:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dextrous aquatic rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sushovan9/AERM-deberta-v3-base
|
Sushovan9
| 2025-08-20T12:00:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T12:00:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lc700x/dpt-dinov2-giant-kitti
|
lc700x
| 2025-08-20T11:59:47Z | 0 | 0 | null |
[
"safetensors",
"dpt",
"vision",
"dinov2",
"depth-estimation",
"arxiv:2304.07193",
"arxiv:2103.13413",
"license:apache-2.0",
"region:us"
] |
depth-estimation
| 2025-08-20T11:46:15Z |
---
license: apache-2.0
tags:
- vision
- dinov2
- depth-estimation
inference: false
---
# Model Card: DPT model with DINOv2 backbone
## Model Details
DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
### Resources
- [DINOv2 Paper](https://arxiv.org/abs/2304.07193)
- [DPT Paper](https://arxiv.org/abs/2103.13413)
### Use with Transformers
```python
from transformers import AutoImageProcessor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-giant-kitti")
model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-giant-kitti")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
## Model Use
### Intended Use
The model is intended to showcase that using the DPT framework with DINOv2 as backbone yields a powerful depth estimator.
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
lc700x/Distill-Any-Depth-Base-hf
|
lc700x
| 2025-08-20T11:58:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"distill-any-depth",
"vision",
"arxiv:2502.19204",
"license:mit",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2025-08-20T02:45:15Z |
---
library_name: transformers
license: mit
pipeline_tag: depth-estimation
arxiv: 2502.19204
tags:
- distill-any-depth
- vision
---
# Distill Any Depth Large - Transformers Version
## Introduction
We present Distill-Any-Depth, a new SOTA monocular depth estimation model trained with our proposed knowledge distillation algorithms. It was introduced in the paper [Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator](http://arxiv.org/abs/2502.19204).
This model checkpoint is compatible with the transformers library.
[Online demo](https://huggingface.co/spaces/xingyang1/Distill-Any-Depth).
### How to use
Here is how to use this model to perform zero-shot depth estimation:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="xingyang1/Distill-Any-Depth-Large-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
```
Alternatively, you can use the model and processor classes:
```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("xingyang1/Distill-Any-Depth-Large-hf")
model = AutoModelForDepthEstimation.from_pretrained("xingyang1/Distill-Any-Depth-Large-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# interpolate to original size and visualize the prediction
post_processed_output = image_processor.post_process_depth_estimation(
outputs,
target_sizes=[(image.height, image.width)],
)
predicted_depth = post_processed_output[0]["predicted_depth"]
depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
depth = depth.detach().cpu().numpy() * 255
depth = Image.fromarray(depth.astype("uint8"))
```
If you find this project useful, please consider citing:
```bibtex
@article{he2025distill,
title = {Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator},
author = {Xiankang He and Dongyan Guo and Hongji Li and Ruibo Li and Ying Cui and Chi Zhang},
year = {2025},
journal = {arXiv preprint arXiv: 2502.19204}
}
```
## Model Card Author
[Parteek Kamboj](https://huggingface.co/keetrap)
|
MattBou00/llama-3-2-1b-detox_v1f_round1-checkpoint-epoch-60
|
MattBou00
| 2025-08-20T11:58:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T11:57:46Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-60")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-60")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-60")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
qing223101/blockassist-bc-coiled_stinging_hummingbird_1755689161
|
qing223101
| 2025-08-20T11:58:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"coiled stinging hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:58:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- coiled stinging hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mzyhh/vit-base-oxford-iiit-pets
|
Mzyhh
| 2025-08-20T11:58:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-20T11:40:05Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1733
- Accuracy: 0.9499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3933 | 1.0 | 370 | 0.2904 | 0.9215 |
| 0.2065 | 2.0 | 740 | 0.2165 | 0.9432 |
| 0.162 | 3.0 | 1110 | 0.1980 | 0.9445 |
| 0.1381 | 4.0 | 1480 | 0.1893 | 0.9513 |
| 0.1487 | 5.0 | 1850 | 0.1872 | 0.9486 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
EmilRyd/gpt-oss-ground-truth-6
|
EmilRyd
| 2025-08-20T11:56:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T11:50:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tarlen/code-search-net-tokenizer
|
tarlen
| 2025-08-20T11:56:30Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:56:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xyeru/llama-8b-touch-rugby
|
Xyeru
| 2025-08-20T11:56:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:26:03Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Xyeru
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chainway9/blockassist-bc-untamed_quick_eel_1755689337
|
chainway9
| 2025-08-20T11:56:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:56:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755690898
|
kapalbalap
| 2025-08-20T11:56:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:55:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AngelinaZanardi/multilingual-e5-base-edu-scorer-lr3e4-bs32-swe
|
AngelinaZanardi
| 2025-08-20T11:55:59Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:intfloat/multilingual-e5-base",
"base_model:finetune:intfloat/multilingual-e5-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T07:19:25Z |
---
library_name: transformers
license: mit
base_model: intfloat/multilingual-e5-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: multilingual-e5-base-edu-scorer-lr3e4-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual-e5-base-edu-scorer-lr3e4-bs32
This model is a fine-tuned version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7551
- Precision: 0.3916
- Recall: 0.3720
- F1 Macro: 0.3704
- Accuracy: 0.4709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Macro | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| No log | 0 | 0 | 4.8374 | 0.0513 | 0.1667 | 0.0785 | 0.3080 |
| 1.0429 | 0.6793 | 1000 | 0.9472 | 0.3859 | 0.3207 | 0.3148 | 0.4320 |
| 1.0195 | 1.3587 | 2000 | 0.8833 | 0.4146 | 0.3355 | 0.3344 | 0.4225 |
| 1.0332 | 2.0380 | 3000 | 0.8693 | 0.4213 | 0.3342 | 0.3294 | 0.4111 |
| 0.9677 | 2.7174 | 4000 | 0.8591 | 0.4387 | 0.3494 | 0.3501 | 0.4213 |
| 0.9779 | 3.3967 | 5000 | 0.8598 | 0.4207 | 0.3584 | 0.3632 | 0.4550 |
| 0.932 | 4.0761 | 6000 | 0.8469 | 0.4429 | 0.3681 | 0.3717 | 0.4496 |
| 0.9421 | 4.7554 | 7000 | 0.8427 | 0.4376 | 0.3658 | 0.3717 | 0.4655 |
| 0.9552 | 5.4348 | 8000 | 0.8377 | 0.4364 | 0.3610 | 0.3682 | 0.4607 |
| 0.9298 | 6.1141 | 9000 | 0.8297 | 0.4580 | 0.3841 | 0.3918 | 0.4655 |
| 0.9307 | 6.7935 | 10000 | 0.8211 | 0.4542 | 0.3728 | 0.3786 | 0.4582 |
| 0.8988 | 7.4728 | 11000 | 0.8164 | 0.4440 | 0.3584 | 0.3597 | 0.4480 |
| 0.9219 | 8.1522 | 12000 | 0.8119 | 0.4617 | 0.3704 | 0.3763 | 0.4534 |
| 0.9289 | 8.8315 | 13000 | 0.8037 | 0.4422 | 0.3710 | 0.3780 | 0.4706 |
| 0.9109 | 9.5109 | 14000 | 0.8214 | 0.4588 | 0.3643 | 0.3659 | 0.4363 |
| 0.9017 | 10.1902 | 15000 | 0.8189 | 0.4425 | 0.3839 | 0.3864 | 0.4534 |
| 0.9117 | 10.8696 | 16000 | 0.8061 | 0.4542 | 0.3778 | 0.3837 | 0.4595 |
| 0.8836 | 11.5489 | 17000 | 0.7869 | 0.4671 | 0.3753 | 0.3825 | 0.4730 |
| 0.8749 | 12.2283 | 18000 | 0.7920 | 0.4604 | 0.3793 | 0.3863 | 0.4665 |
| 0.9009 | 12.9076 | 19000 | 0.7990 | 0.4645 | 0.3689 | 0.3753 | 0.4786 |
| 0.8984 | 13.5870 | 20000 | 0.7838 | 0.4652 | 0.3806 | 0.3899 | 0.4754 |
| 0.8525 | 14.2663 | 21000 | 0.7947 | 0.4478 | 0.3693 | 0.3758 | 0.4835 |
| 0.8514 | 14.9457 | 22000 | 0.7822 | 0.4734 | 0.3882 | 0.3976 | 0.4720 |
| 0.8796 | 15.625 | 23000 | 0.7804 | 0.4810 | 0.3823 | 0.3917 | 0.4706 |
| 0.8722 | 16.3043 | 24000 | 0.7820 | 0.4803 | 0.3819 | 0.3920 | 0.4738 |
| 0.8712 | 16.9837 | 25000 | 0.7815 | 0.4716 | 0.3845 | 0.3939 | 0.4679 |
| 0.8824 | 17.6630 | 26000 | 0.7827 | 0.4680 | 0.3726 | 0.3793 | 0.4792 |
| 0.8344 | 18.3424 | 27000 | 0.7792 | 0.4653 | 0.3760 | 0.3833 | 0.4810 |
| 0.8291 | 19.0217 | 28000 | 0.7755 | 0.4636 | 0.3806 | 0.3889 | 0.4774 |
| 0.8287 | 19.7011 | 29000 | 0.7754 | 0.4759 | 0.3861 | 0.3951 | 0.4762 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0.dev20241112+cu121
- Datasets 4.0.0
- Tokenizers 0.21.4
|
aralper18/blockassist-bc-gilded_tangled_albatross_1755690855
|
aralper18
| 2025-08-20T11:55:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded tangled albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:55:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded tangled albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/llama-3-2-1b-detox_v1f_round1-checkpoint-epoch-40
|
MattBou00
| 2025-08-20T11:55:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-08-20T11:54:24Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-40")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-40")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-08-20_11-48-41/checkpoints/checkpoint-epoch-40")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
dejanseo/chrome_models
|
dejanseo
| 2025-08-20T11:55:12Z | 899 | 8 | null |
[
"tflite",
"TensorFlow Lite v3",
"region:us"
] | null | 2024-11-07T00:35:28Z |
---
tags:
- TensorFlow Lite v3
---
# A Collection of Google's On-Device Models
## Help us complete the list
- To contribute, go to `C:\Users\YOUR_PC_USER\AppData\Local\Google\Chrome\User Data\optimization_guide_model_store`
- If you find a new non-empty folder not listed [here](https://huggingface.co/dejanseo/chrome_models/upload/main), please [upload it to this repo](https://huggingface.co/dejanseo/chrome_models/upload/main)
## List of All Available Models
Following is the complete list of machine learning models in Chrome, many of which are on your device. They are located in your User Data folder, and you can easily check which ones you have, as they are all in numbered folders.
## Mapping of folder names to optimization target descriptions
```
# Mapping of folder names to optimization target descriptions
OPTIMIZATION_TARGETS = {
"0": "OPTIMIZATION_TARGET_UNKNOWN",
"1": "OPTIMIZATION_TARGET_PAINFUL_PAGE_LOAD",
"2": "OPTIMIZATION_TARGET_LANGUAGE_DETECTION",
"3": "OPTIMIZATION_TARGET_PAGE_TOPICS",
"4": "OPTIMIZATION_TARGET_SEGMENTATION_NEW_TAB",
"5": "OPTIMIZATION_TARGET_SEGMENTATION_SHARE",
"6": "OPTIMIZATION_TARGET_SEGMENTATION_VOICE",
"7": "OPTIMIZATION_TARGET_MODEL_VALIDATION",
"8": "OPTIMIZATION_TARGET_PAGE_ENTITIES",
"9": "OPTIMIZATION_TARGET_NOTIFICATION_PERMISSION_PREDICTIONS",
"10": "OPTIMIZATION_TARGET_SEGMENTATION_DUMMY",
"11": "OPTIMIZATION_TARGET_SEGMENTATION_CHROME_START_ANDROID",
"12": "OPTIMIZATION_TARGET_SEGMENTATION_QUERY_TILES",
"13": "OPTIMIZATION_TARGET_PAGE_VISIBILITY",
"15": "OPTIMIZATION_TARGET_PAGE_TOPICS_V2",
"16": "OPTIMIZATION_TARGET_SEGMENTATION_CHROME_LOW_USER_ENGAGEMENT",
"17": "OPTIMIZATION_TARGET_SEGMENTATION_FEED_USER",
"18": "OPTIMIZATION_TARGET_CONTEXTUAL_PAGE_ACTION_PRICE_TRACKING",
"19": "OPTIMIZATION_TARGET_TEXT_CLASSIFIER",
"20": "OPTIMIZATION_TARGET_GEOLOCATION_PERMISSION_PREDICTIONS",
"21": "OPTIMIZATION_TARGET_SEGMENTATION_SHOPPING_USER",
"22": "OPTIMIZATION_TARGET_SEGMENTATION_CHROME_START_ANDROID_V2",
"23": "OPTIMIZATION_TARGET_SEGMENTATION_SEARCH_USER",
"24": "OPTIMIZATION_TARGET_OMNIBOX_ON_DEVICE_TAIL_SUGGEST",
"25": "OPTIMIZATION_TARGET_CLIENT_SIDE_PHISHING",
"26": "OPTIMIZATION_TARGET_OMNIBOX_URL_SCORING",
"27": "OPTIMIZATION_TARGET_SEGMENTATION_DEVICE_SWITCHER",
"28": "OPTIMIZATION_TARGET_SEGMENTATION_ADAPTIVE_TOOLBAR",
"29": "OPTIMIZATION_TARGET_SEGMENTATION_TABLET_PRODUCTIVITY_USER",
"30": "OPTIMIZATION_TARGET_CLIENT_SIDE_PHISHING_IMAGE_EMBEDDER",
"31": "OPTIMIZATION_TARGET_NEW_TAB_PAGE_HISTORY_CLUSTERS_MODULE_RANKING",
"32": "OPTIMIZATION_TARGET_WEB_APP_INSTALLATION_PROMO",
"33": "OPTIMIZATION_TARGET_TEXT_EMBEDDER",
"34": "OPTIMIZATION_TARGET_VISUAL_SEARCH_CLASSIFICATION",
"35": "OPTIMIZATION_TARGET_SEGMENTATION_BOTTOM_TOOLBAR",
"36": "OPTIMIZATION_TARGET_AUTOFILL_FIELD_CLASSIFICATION",
"37": "OPTIMIZATION_TARGET_SEGMENTATION_IOS_MODULE_RANKER",
"38": "OPTIMIZATION_TARGET_SEGMENTATION_DESKTOP_NTP_MODULE",
"39": "OPTIMIZATION_TARGET_PRELOADING_HEURISTICS",
"40": "OPTIMIZATION_TARGET_TEXT_SAFETY",
"41": "OPTIMIZATION_TARGET_SEGMENTATION_ANDROID_HOME_MODULE_RANKER",
"42": "OPTIMIZATION_TARGET_COMPOSE",
"43": "OPTIMIZATION_TARGET_PASSAGE_EMBEDDER",
"44": "OPTIMIZATION_TARGET_PHRASE_SEGMENTATION",
"45": "OPTIMIZATION_TARGET_SEGMENTATION_COMPOSE_PROMOTION",
"46": "OPTIMIZATION_TARGET_URL_VISIT_RESUMPTION_RANKER",
"47": "OPTIMIZATION_TARGET_CAMERA_BACKGROUND_SEGMENTATION",
"48": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_HISTORY_SEARCH",
"49": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_PROMPT_API",
"50": "OPTIMIZATION_TARGET_SEGMENTATION_METRICS_CLUSTERING",
"51": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_SUMMARIZE",
"52": "OPTIMIZATION_TARGET_PASSWORD_MANAGER_FORM_CLASSIFICATION",
"53": "OPTIMIZATION_TARGET_NOTIFICATION_CONTENT_DETECTION",
"54": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_HISTORY_QUERY_INTENT",
"55": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_SCAM_DETECTION",
"56": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_PERMISSIONS_AI",
"57": "OPTIMIZATION_TARGET_EXPERIMENTAL_EMBEDDER",
"58": "OPTIMIZATION_TARGET_SEGMENTATION_FEDCM_USER",
"59": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_WRITING_ASSISTANCE_API",
"60": "OPTIMIZATION_TARGET_GEOLOCATION_IMAGE_PERMISSION_RELEVANCE",
"61": "OPTIMIZATION_TARGET_NOTIFICATION_IMAGE_PERMISSION_RELEVANCE",
"62": "OPTIMIZATION_TARGET_MODEL_EXECUTION_FEATURE_PROOFREADER_API",
"63": "OPTIMIZATION_TARGET_SEGMENTATION_IOS_DEFAULT_BROWSER_PROMO",
"64": "OPTIMIZATION_TARGET_EDU_CLASSIFIER",
"65": "OPTIMIZATION_TARGET_PERMISSIONS_AIV4_GEOLOCATION_DESKTOP",
"66": "OPTIMIZATION_TARGET_PERMISSIONS_AIV4_NOTIFICATIONS_DESKTOP",
"67": "OPTIMIZATION_TARGET_GENERALIZED_SAFETY"
}
```
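With the mapping above, a short script can report which models are present on a given machine. This is a Windows-only sketch that assumes the store path from the contribution note and the `OPTIMIZATION_TARGETS` dict defined in the previous block.

```python
import os
from pathlib import Path

# Windows-only sketch: list which on-device models exist locally.
# Assumes OPTIMIZATION_TARGETS from the block above is in scope.
store = (Path(os.environ["LOCALAPPDATA"]) / "Google" / "Chrome"
         / "User Data" / "optimization_guide_model_store")

for folder in sorted(store.iterdir(), key=lambda p: p.name.zfill(4)):
    if folder.is_dir() and any(folder.iterdir()):  # skip empty folders
        label = OPTIMIZATION_TARGETS.get(folder.name, "UNLISTED (consider uploading it)")
        print(f"{folder.name}: {label}")
```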
Source: [DEJAN](https://dejan.ai/blog/chrome-ai-models/)
Gemini Nano download link (intercepted model download URL): http://edgedl.me.gvt1.com/edgedl/release2/chrome_component/adhtst3uf2cltjrk6xr625t2jwbq_2024.9.25.2033/fklghjjljmnfjoepjmlobpekiapffcja_2024.9.25.2033_all_adzzukuhpsemphsujkjgzvmtrunq.crx3
|
LOLFUNNYLOLFUNNY/clean-rouwei-0.8.0
|
LOLFUNNYLOLFUNNY
| 2025-08-20T11:54:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"aesthetic",
"anatomy",
"versality",
"vibrant",
"stability",
"illustrious",
"en",
"base_model:Minthy/RouWei-0.8",
"base_model:finetune:Minthy/RouWei-0.8",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-20T11:47:37Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- aesthetic
- anatomy
- versality
- vibrant
- stability
- illustrious
base_model: Minthy/RouWei-0.8
---
The original model is [here](https://civitai.com/models/1830109).
It was created by [RedRayz](https://civitai.com/user/RedRayz), whose Hugging Face profile is [here](https://huggingface.co/RedRayz).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755689209
|
sampingkaca72
| 2025-08-20T11:52:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:52:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755690679
|
2hpsatt
| 2025-08-20T11:52:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:52:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755688942
|
lautan
| 2025-08-20T11:51:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:51:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nema122/blockassist-bc-furry_rugged_camel_1755690489
|
nema122
| 2025-08-20T11:49:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry rugged camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:48:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry rugged camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/test-run-fsdp-v1
|
kevinshin
| 2025-08-20T11:48:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"alignment-handbook",
"trl",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-critique",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T16:19:14Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-creative-writing-3k-critique
library_name: transformers
model_name: test-run-fsdp-v1
tags:
- generated_from_trainer
- sft
- alignment-handbook
- trl
licence: license
---
# Model Card for test-run-fsdp-v1
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-creative-writing-3k-critique](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-critique) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/test-run-fsdp-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/mcheszoc)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755688737
|
manusiaperahu2012
| 2025-08-20T11:45:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:45:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755688803
|
aleebaster
| 2025-08-20T11:45:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:45:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755688643
|
vwzyrraz7l
| 2025-08-20T11:44:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:44:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sarayusapa/T5_Large_QLoRA
|
sarayusapa
| 2025-08-20T11:42:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T07:37:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755688597
|
lisaozill03
| 2025-08-20T11:42:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:42:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755688931
|
Sayemahsjn
| 2025-08-20T11:41:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:41:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755688499
|
quantumxnode
| 2025-08-20T11:40:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:40:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EmilRyd/gpt-oss-ground-truth-20
|
EmilRyd
| 2025-08-20T11:38:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T11:33:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755689548
|
canoplos112
| 2025-08-20T11:34:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:33:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-nasty_tall_sardine
|
syuvers
| 2025-08-20T11:32:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am nasty_tall_sardine",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T08:47:40Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am nasty_tall_sardine
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755687851
|
indoempatnol
| 2025-08-20T11:30:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:30:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jr12lm12/Meta-Llama-3.1-8B-De_En_Solar-QLORA
|
Jr12lm12
| 2025-08-20T11:30:34Z | 49 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-20T20:38:19Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jr12lm12
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Connexus/mt5-multitask-adapter-v1
|
Connexus
| 2025-08-20T11:29:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:29:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timpal0l/Mistral-7B-v0.1-flashback-v2
|
timpal0l
| 2025-08-20T11:28:33Z | 794 | 9 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"flashback",
"web",
"conversational",
"sv",
"en",
"no",
"da",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T10:31:37Z |
---
language:
- sv
- en
- 'no'
- da
license: mit
tags:
- pretrained
- flashback
- web
- conversational
models:
- timpal0l/Mistral-7B-v0.1-flashback-v2-instruct
pipeline_tag: text-generation
widget:
- text: Jag tycker att det är roligt med
model-index:
- name: Mistral-7B-v0.1-flashback-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.66
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=timpal0l/Mistral-7B-v0.1-flashback-v2
name: Open LLM Leaderboard
---
# 🐈⬛ Mistral-7B-v0.1-flashback-v2

Mistral-7B-v0.1-flashback-v2 is a continuation of the pretraining process for the base Mistral-7B-v0.1 model, utilizing 2,251,233 forum threads from the Swedish website https://www.flashback.org/, which is roughly 40 GB of text.
It is a full fine-tune trained for one epoch.
* GGUF Version available [**Here**](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-GGUF)
* Instruct version [**Here**](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2-instruct)
## How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "timpal0l/Mistral-7B-v0.1-flashback-v2"
device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
model.to(device)

prompt = "Idag är det den bästa"
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device)

generated_token_ids = model.generate(
    inputs=input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=1,
)[0]

generated_text = tokenizer.decode(generated_token_ids)
print(generated_text)
```
```
<s> Idag är det den bästa dagen i hela veckan, för nu tar det slut!\n\n>! Gnällfesten!\n\nJag sitter här, oerhört förvirrad, och försöker förstå varför vi ens måste fortsätta att existera efter döden. Jag menar, jag förstår ju egentligen att det aldrig kan ta slut, eller inte "ta slut" i den bemärkelsen att materian försvinner, men det är inte det jag pratar om.\n\nDöden, det faktum att man dör och aldrig kan uppleva livet igen. Det som är liv och ger livet en mening, det försvinner i döden. Och sen börjas det om, om och om igen. Varför behöver vi så många liv? Vi är ju inte ens medvetna av att vi någonsin har levt, så varför ska vi komma hit och bli medvetna hela tiden?\n\nDet här är en sådan fråga som jag aldrig kan få
```
## Data Format:
To mimic the data format used in pre-training, each thread has the following structure:
```html
# Thread_Title
username_thread_creator:
Hello, this is my thread...
username_user_1:
This is a response to the thread, without quoting anything.
username_user_2:
> username_user_1: This is a response to the thread, without quoting anything.
I am now quoting username_user_1...
```
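The exact serialization code is not published; as a minimal sketch of the format above (the helper name and the input dict layout are hypothetical assumptions), a thread could be rendered like this:
```python
def format_thread(title, posts):
    """Serialize a forum thread into the pre-training text format.

    posts: ordered list of dicts with keys "username", "text", and an
    optional "quote" tuple of (quoted_username, quoted_text).
    """
    lines = [f"# {title}", ""]
    for post in posts:
        lines.append(f"{post['username']}:")
        quote = post.get("quote")
        if quote:
            quoted_user, quoted_text = quote
            for i, quoted_line in enumerate(quoted_text.splitlines()):
                # Only the first quoted line carries the quoted user's name.
                prefix = f"> {quoted_user}: " if i == 0 else "> "
                lines.append(prefix + quoted_line)
        lines.append(post["text"])
        lines.append("")
    return "\n".join(lines)


print(format_thread("Thread_Title", [
    {"username": "username_thread_creator", "text": "Hello, this is my thread..."},
    {"username": "username_user_1", "text": "This is a response to the thread, without quoting anything."},
]))
```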
### Random training sample:
```html
# Tips om aktiviter och sevärdheter i Stockholm för någon med funktionsnedsättning
Roozbeh:
Hej!
Jag jobbar som assistent åt en kille på ett stödboende.
Nästa vecka åker han, jag och en kollega till Stockholm och han är superpeppad på att se sig omkring.
Har ni några guld tips?
Får gärna ge förslag både dag och kvällstid om ni kommer på något.
Vi har redan tänkt på att se slottet.
Och gamla staden, finns där något kanske?
Bra cafen/restauranger som inte är allt för dyra.
Några ställen som man bara måste se eller göra i Stockholm?
Han är inte rullstolsbunden ska nämnas, är ung och i ganska bra kondition fysiskt.
Alla tips är välkomna tack!
Annéa:
Beror lite på vad man gillar. Om ni ändå är vi Slottet så har ni ju dom stora turistgatorna i Gamla Stan runt hörnet precis, dock inget ställe man vill gå på om man tycker det är jobbigt med folk och att trängas och ingenstans där man äter särskilt bra eller billigt.
Laust:
Åka upp på globen funkar med rullstol
Thomaz:
Välkomna! 🙂
Vad har han för intressen?
Är ni ändå på slottet kan jag rekommendera livrustkammaren, där kläder och attiraljer såsom vagnar (och även uppstoppade hästar) från svenska kungligheter är utställda.
Anne-Jorunn:
Gröna Lund och skansen är guld, om hen klarar av att åka karusell så går ni också förbi alla köer om du är stödperson.
Abba museumet, Vasamuseumet, militärhistoriska museet, tekniska museet, Junibacken. Finns mycket bra.
Annars kan det vara skoj att gå runt på Mall of Scandinavia, skönt att vara inne med toaletter inom räckhåll.
Muscab:
> Roozbeh: Hej!
>
> Jag jobbar som assistent åt en kille på ett stödboende.
> Nästa vecka åker han, jag och en kollega till Stockholm och han är superpeppad på att se sig omkring.
> Har ni några guld tips?
> Får gärna ge förslag både dag och kvällstid om ni kommer på något.
> Vi har redan tänkt på att se slottet.
> Och gamla staden, finns där något kanske?
> Bra cafen/restauranger som inte är allt för dyra.
> Några ställen som man bara måste se eller göra i Stockholm?
> Han är inte rullstolsbunden ska nämnas, är ung och i ganska bra kondition fysiskt.
> Alla tips är välkomna tack!
Jag tror de mesta platser är ganska ovänliga för rullstol. Backar, grusvägar, kullersten, trånga dörrar, trappor. Finns det någon restaurang/café som är billig och rullstolsvänlig? Vet inte. Köp ett paket glassar på ica istället.
Något man måste göra i Stockholm? Det finns inte mycket att se. Turister brukade gå runt i gamla stan och titta på tunnelbanestationer.
Annéa:
> Muscab: Jag tror de mesta platser är ganska ovänliga för rullstol. Backar, grusvägar, kullersten, trånga dörrar, trappor. Finns det någon restaurang/café som är billig och rullstolsvänlig? Vet inte. Köp ett paket glassar på ica istället.
>
> Något man måste göra i Stockholm? Det finns inte mycket att se. Turister brukade gå runt i gamla stan och titta på tunnelbanestationer.
Han sitter ju INTE i rullstol...
Tharsika:
Vad har han för problematik? Vad kan störa/vara svårt för honom ? Rullstol ? Kramp? Utåtagerande ?
Muscab:
> Annéa: Han sitter ju INTE i rullstol...
Läste fel. 🤦
Boine:
Armémuseum
Historiska museet
Åka djurgårdsfärjan alt. ”Skärgårdstur” med SL
Utsikt på Södermalm + promenaden dit. Mariaberget & Monteliusvägen
Gamla stan - Mårten Trotzig gränd samt kanonkulorna i husväggen några meter från Stortorget
Målningar i tunnelbanan
Spela äventyrsgolf inomhus
Se guldbron - Slussen
Utsikt Katarinahissen - Slussen, man går in i porten till Gondolen (nog nerlagd) tar hissen längst upp och går en våning upp annars får man gå dit bakvägen onödigt långt.
Gå hela Drottninggatan
Slottet ev tajma in vaktavlösning
Kolla om det finns något personen har intresse av/om, finns en hel gratis museum
Roozbeh:
Vilka bra tips! Tack allihopa vad fint av er att bidra! Så uppskattat verkligen 🙂
Nu är vi åter hemma igen efter resan till Stockholm.
Resan gick jättebra, vi planerade noga och gjorde det mesta av tid med hänsyn till funktionsnedsättningen. Vi gick såklart efter vad han själv önskade göra och gav förslag på vad Stockholm erbjuder. Då vi bara var i Stockholm under ca 24 timmar måste jag säga att vi fick gjort mycket mer än vi väntade oss. Vi hade ingen bil. Istället köpte vi ett 24 tim kort för kollektivtrafiken och med hjälp av SL appen och google maps navigerade jag runt oss i staden.
Hotellet vi bodde på låg nära Centralstationen.
Detta gjorde vi:
Gick runt hela Gamla Stan. Åt på restaurang där samt i Vasaplan och även fikade på diverse caféer i Gamla Stan. Vi såg det Kungliga slottet både inuti och utanpå, var uppskattat! Han tyckte det var så häftigt. Strosade runt i alla gränder, torg och gator i Gamla Stan, gick in i trevliga små butiker och tog fina foton! Vi tittade på alla båtar i hamnen. Parlamentet. Stadshuset. Vi gick in på diverse olika ställen vi gick förbi som han impulsivt kände dragning till. Typ karaokebar, kulturhuset, pubbar etc. Allt han kände för gjorde vi. Det var hans resa 100 %.
Åkte med färja till Djurgården och besökte ABBA museet där han fick lyssna på sånger, se rekvisita, sjunga och t.om åka helikopter i VR.
Vi shoppade också såklart då Stockholm har så många butiker!(Hela Drottninggatan och ställen på/nära Vasaplan)
Under resan interagerade han med en massa Stockholmare. Sade till flertalet tjejer att han älskade dom haha vilket charmör! Vi gick förbi en högvakt vid slottet som han hälsade på. Det var en hon, och vakten rörde inte en min men följde honom med blicken. Givetvis fick vi säga det att dom inte pratar med någon då det ingår i jobbet etc.
Han blev bemött med respekt och ömhet av de flesta ska sägas. Han var glad över att ha fått prata med så många människor. Vi stannade ofta då han ville fråga t.ex poliser eller andra arbetare om saker, alla var gulliga och vänliga mot honom.
Vi åkte under resan buss, tunnelbana(också en önskan att få göra) och färjor till olika färjterminaler för att få se Stockholm från vattnet.
Såg också Sergels Torg på kvällen eller "Plattan" som jag tror den också kallas. En pelare var vackert upplyst i blått ljus där och han berättade exalterat om hur många filmer han sett som har plattan som scenplats etc. Kvällen bjöd på solnedgången från hotellets tak. Åt en fantastisk frukostbuffé på morgonen med flera omgångar god mat! Härligt att han njöt.
Då han faktiskt har en fysisk och kognitiv nedsättning är vi så glada att han orkade så mycket! Bäst av allt sa han sig vara väldigt nöjd med resan. Vi ska nu planera fler resor till Stockholm i framtiden. Då gör vi fler saker, sånt vi inte hann med den här gången. Var lite begränsat med tid(24 timmar) samt behövde vi tänka på att energi skulle räcka till utan att kroppen skulle triggas till att hans nedsättnings symptom blossade upp. Behövs ju givetvis pauser med jämna mellanrum då.
Tack och lov för apparna som jag kunde leda oss efter. Att åka kollektivt hade varit svårt annars och jag kunde se efter kartan var våra besöksmål låg samt vilka vägar som kunde spara oss onödig tid.
Tack ska ni ha för tipsen, igen. Tack till Stockholm för att ni tog emot oss med respekt han var så nöjd med resan.
Hej så länge, vi kommer åter i framtiden! 😁
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_timpal0l__Mistral-7B-v0.1-flashback-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.53|
|AI2 Reasoning Challenge (25-Shot)|57.17|
|HellaSwag (10-Shot) |80.74|
|MMLU (5-Shot) |59.98|
|TruthfulQA (0-shot) |40.66|
|Winogrande (5-shot) |77.19|
|GSM8k (5-shot) |29.42|
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755689183
|
lqpl
| 2025-08-20T11:28:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:27:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thedeoxen/FLUX.1-Kontext-dev-reference-depth-fusion-LORA
|
thedeoxen
| 2025-08-20T11:27:49Z | 0 | 23 |
diffusers
|
[
"diffusers",
"flux",
"depth",
"controlnet",
"kontext",
"flux-kontext",
"img2img",
"image",
"editing",
"lora",
"image-to-image",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2025-08-17T11:11:19Z |
---
license: apache-2.0
base_model:
- black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
tags:
- flux
- depth
- controlnet
- kontext
- flux-kontext
- img2img
- image
- editing
- lora
library_name: diffusers
---
# Depth Reference Fusion LoRA
## 📝 Short description
A LoRA for **Flux Kontext Dev** that fuses a **reference image (left)** with a **depth map (right)**.
It preserves **identity and style** from the reference while following the pose and structure from the depth map.
**Trigger word:** `redepthkontext`
[Demo Video](https://youtu.be/cCwoPOkxq5c?si=T4tr3MWi7EUNJXIW)


---
### Example 2


---
### Example 3


---
## 📖 Extended description
This LoRA was primarily trained on **humans**, but it also works with **objects**.
Its main purpose is to **preserve identity** — facial features, clothing, or object characteristics — from the reference image, while adapting them to the pose and composition defined by the depth map.
---
## ⚙️ How to use
- Concatenate two images side by side:
- **Left:** reference image (person or object)
- **Right:** depth map (grayscale or silhouette)
- Add the trigger word **`redepthkontext`** in your prompt.
### ✅ Example prompt
```
redepthkontext change depth map to photo
```
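A minimal end-to-end sketch with 🧨 diffusers (assuming a recent diffusers release that ships `FluxKontextPipeline`, and local `reference.png`/`depth.png` files; treat this as an illustration rather than the author's exact workflow):
```python
import torch
from PIL import Image
from diffusers import FluxKontextPipeline  # available in recent diffusers releases

def concat_side_by_side(reference: Image.Image, depth: Image.Image) -> Image.Image:
    # Scale the depth map to the reference height so the two halves align.
    scale = reference.height / depth.height
    depth = depth.resize((int(depth.width * scale), reference.height))
    canvas = Image.new("RGB", (reference.width + depth.width, reference.height))
    canvas.paste(reference, (0, 0))
    canvas.paste(depth, (reference.width, 0))
    return canvas

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
# Standard LoRA loading from the Hub; specify weight_name if the repo
# contains more than one .safetensors file.
pipe.load_lora_weights("thedeoxen/FLUX.1-Kontext-dev-reference-depth-fusion-LORA")

control = concat_side_by_side(Image.open("reference.png"), Image.open("depth.png"))
image = pipe(image=control, prompt="redepthkontext change depth map to photo").images[0]
image.save("fused.png")
```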
---
## 🎯 What it does
- Preserves character or object **identity** across generations.
- Embeds the subject into the new pose/scene defined by the depth map.
- Works best when the depth map has **similar proportions and sizes** to the reference.
---
## ⚡ Tips
- Works better if the depth map is not drastically different in object scale.
- Can be combined with text prompts for additional background/environment control.
---
## 📌 Use cases
- Human portraits in different poses.
- Consistent character design across multiple scenes.
- Object transformations (cars, furniture, props) with depth-guided placement.
- Storyboarding, comics, or animation frame generation.
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755689099
|
canoplos112
| 2025-08-20T11:26:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:25:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755687617
|
helmutsukocok
| 2025-08-20T11:25:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:25:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xGareeb/blockassist-bc-diving_jumping_llama_1755689002
|
0xGareeb
| 2025-08-20T11:25:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"diving jumping llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:24:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving jumping llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Emmett1234/Aug2025v1
|
Emmett1234
| 2025-08-20T11:24:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T10:52:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Emmett
---
# Aug2025V1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Emmett` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "Emmett",
    "lora_weights": "https://huggingface.co/Emmett1234/Aug2025v1/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Emmett1234/Aug2025v1', weight_name='lora.safetensors')
image = pipeline('Emmett').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2454
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Emmett1234/Aug2025v1/discussions) to add images that show off what you’ve made with this LoRA.
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755687424
|
ihsanridzi
| 2025-08-20T11:24:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:24:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr22/blockassist-bc-robust_fluffy_ram_1755688946
|
sekirr22
| 2025-08-20T11:24:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:24:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdagsadgd/blockassist-bc-sedate_squeaky_salamander_1755685850
|
sdagsadgd
| 2025-08-20T11:23:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:23:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755687300
|
chainway9
| 2025-08-20T11:21:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:21:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jawaker/t5-base-tcp-top-div
|
Jawaker
| 2025-08-20T11:20:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:20:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755687170
|
sampingkaca72
| 2025-08-20T11:19:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:19:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EposLabs/Qwen2.5-7B-Cross-P-Wiki
|
EposLabs
| 2025-08-20T11:16:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T11:16:24Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BizarreCake
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755688530
|
kapalbalap
| 2025-08-20T11:16:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:16:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755686952
|
hakimjustbao
| 2025-08-20T11:16:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:16:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nema122/blockassist-bc-furry_rugged_camel_1755688492
|
nema122
| 2025-08-20T11:15:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"furry rugged camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:15:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry rugged camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
video-de-abigail-lalama-y-snayder-filtrado/new.video.de.abigail.lalama.y.snayder.filtrado.en.telegram.se.vuelve.viral
|
video-de-abigail-lalama-y-snayder-filtrado
| 2025-08-20T11:15:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T11:15:07Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755686934
|
kojeklollipop
| 2025-08-20T11:15:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:15:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EmilRyd/gpt-oss-ground-truth-10
|
EmilRyd
| 2025-08-20T11:15:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T11:09:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755686649
|
manusiaperahu2012
| 2025-08-20T11:13:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:13:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unitova/blockassist-bc-zealous_sneaky_raven_1755686704
|
unitova
| 2025-08-20T11:12:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:12:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist-bc-tall_lively_caribou_1755684605
|
kavpro
| 2025-08-20T11:10:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall lively caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:10:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
atrubertss/MyGemmaNPC
|
atrubertss
| 2025-08-20T11:09:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T11:03:03Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Build a chat-capable text-generation pipeline on the GPU.
generator = pipeline("text-generation", model="atrubertss/MyGemmaNPC", device="cuda")
# Send the prompt as a chat message and return only the newly generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
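The exact dataset and hyperparameters are not documented here, but with the framework versions listed below, an SFT run of this kind typically looks like the following sketch (the dataset name and output directory are placeholders):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data for MyGemmaNPC is not documented.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",           # base model named in this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="MyGemmaNPC"),  # assumed output directory
)
trainer.train()
```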
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755688098
|
kapalbalap
| 2025-08-20T11:09:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:09:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755688046
|
Ferdi3425
| 2025-08-20T11:08:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:08:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755686583
|
lisaozill03
| 2025-08-20T11:08:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:08:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aralper18/blockassist-bc-gilded_tangled_albatross_1755688032
|
aralper18
| 2025-08-20T11:07:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded tangled albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:07:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded tangled albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ver-intimo-videos-de-abigail-lalama/Orginal.video.filtrado.de.Abigail.Lalama.y.Snayder.en.twitter.y.telegram
|
ver-intimo-videos-de-abigail-lalama
| 2025-08-20T11:07:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T11:07:10Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755686422
|
quantumxnode
| 2025-08-20T11:06:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:06:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Orginal-vietnamese-viral-video-clip-link/Hot.New.full.videos.vietnamese.Viral.Video.Official.Tutorial
|
Orginal-vietnamese-viral-video-clip-link
| 2025-08-20T11:06:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T11:06:10Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
Arpita1/sbs_convai2_llama3.1_lora
|
Arpita1
| 2025-08-20T11:06:02Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2508.06886",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-08-20T11:03:53Z |
---
license: llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Model Card
### Description
Llama-3.1-8B-Instruct finetuned on [ConvAI2](https://parl.ai/projects/convai2/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/).
- **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak)
- **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886)
- **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1)
- **Language(s) (NLP):** English
- **License:** Llama 3.1
**Note:** These are just the LoRA weights and need to be merged with the Llama-3.1-8B-Instruct model before use.
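A minimal merge sketch using PEFT is shown below; it assumes a standard LoRA adapter layout and enough memory to hold the 8B base model (access to the gated Llama 3.1 weights is required):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in bfloat16 to keep memory use manageable.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
# Attach this repository's LoRA adapter, then fold it into the base weights.
model = PeftModel.from_pretrained(base, "Arpita1/sbs_convai2_llama3.1_lora")
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
```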
## BibTeX
```
@inproceedings{saggar2025,
author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.},
title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores},
booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence},
year = {2025},
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687818
|
kapalbalap
| 2025-08-20T11:04:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:04:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755687796
|
Ferdi3425
| 2025-08-20T11:04:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:03:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ver-video-intimo-de-abigail-lalama-clip/ver.filtrado.Video.de.Abigail.Lalama.y.Snayder
|
ver-video-intimo-de-abigail-lalama-clip
| 2025-08-20T11:03:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T11:01:19Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Abigail Lalama goes viral: influencer confirms video with Snayder on Telegram
The name Abigail Lalama is trending on Telegram and X after the leak of her video with Snayder. Here is who they are and what happened.
Photo caption: Abigail confirmed the leak of the content. - Photo: Instagram abigail_lalama
The name Abigail Lalama went viral on social networks such as Telegram and Twitter (now X) after the leak of an intimate video with Snayder was confirmed. This set off a spike in searches such as "Abigail Lalama leaked video", "Abigail and Snayder leaked video", and "Abigail Lalama Telegram".
The confirmation came directly from the influencer, triggering a wave of comments, reactions, and solidarity online. Below we explain who Abigail Lalama is, who Snayder is, what she said, and why the case went viral so quickly.
Who is Abigail Lalama and why is she known?
Abigail Lalama is a 22-year-old Ecuadorian content creator from Guayaquil who gained popularity mainly on TikTok, where she shares live streams, challenges, family moments, and everyday content together with her twin sister, Génesis.
Her community has grown thanks to her approachability, charisma, and family-oriented style. Her TikTok account @laoficialabigail has more than 400,000 followers, and on Instagram she exceeds 173,000. Together with her sister she forms "Team Lalama", publishing content centered on daily life, motherhood, and family ties.
Who is Snayder and what video was leaked?
Snayder's exact identity has not been revealed by the main outlets consulted so far, but he is known to be part of Abigail's circle, reportedly her ex-partner.
The leaked video, described as intimate, was uploaded to platforms such as Telegram and TikTok. Users reported that the young woman in the viral recording shared tattoos and features with Abigail Lalama. According to Abigail, the video circulated without her consent, and she attributes the leak to that ex-partner.
What did Abigail Lalama say about the leak?
In a live video, visibly affected and in tears, Abigail Lalama confirmed the leak of the viral content. She directly accused her ex-partner of having shared the intimate material with the intention of disturbing her new relationship.
|
Arpita1/sbs_personachat_llama3.1_lora
|
Arpita1
| 2025-08-20T11:03:08Z | 0 | 0 | null |
[
"safetensors",
"en",
"arxiv:2508.06886",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-08-20T10:40:54Z |
---
license: llama3.1
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# Model Card
### Description
Llama-3.1-8B-Instruct finetuned on [PersonaChat](https://parl.ai/projects/personachat/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/).
- **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak)
- **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886)
- **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1)
- **Language(s) (NLP):** English
- **License:** Llama 3.1
**Note:** These are just the LoRA weights and need to be merged with the Llama-3.1-8B-Instruct model before use.
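As with the companion ConvAI2 adapter, a minimal PEFT merge sketch (assuming a standard LoRA adapter layout and access to the gated base weights) looks like this:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
# Fold the PersonaChat LoRA adapter into the base model.
model = PeftModel.from_pretrained(base, "Arpita1/sbs_personachat_llama3.1_lora")
model = model.merge_and_unload()
```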
## BibTeX
```
@inproceedings{saggar2025,
author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.},
title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores},
booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence},
year = {2025},
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755686028
|
coelacanthxyz
| 2025-08-20T11:01:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T11:01:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|