modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Negark/bert-sentiment-digikala-augmented-WithTokens | Negark | 2025-08-20T15:09:25Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:HooshvareLab/bert-fa-base-uncased-sentiment-digikala", "base_model:finetune:HooshvareLab/bert-fa-base-uncased-sentiment-digikala", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T13:57:51Z |
---
library_name: transformers
license: apache-2.0
base_model: HooshvareLab/bert-fa-base-uncased-sentiment-digikala
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bert-sentiment-digikala-augmented-WithTokens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-sentiment-digikala-augmented-WithTokens
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased-sentiment-digikala](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-digikala) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Accuracy: 0.8213
- F1: 0.8201
- Precision: 0.8211
- Recall: 0.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
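Until the card is completed, here is a minimal sketch of loading this checkpoint for inference; the repo id and `text-classification` task come from the metadata above, and the example input is an illustrative Persian product review (an assumption based on the Digikala base model):

```python
from transformers import pipeline

# Minimal sketch: assumes the repo ships a compatible tokenizer alongside the model.
classifier = pipeline(
    "text-classification",
    model="Negark/bert-sentiment-digikala-augmented-WithTokens",
)
print(classifier("کیفیت محصول عالی بود"))  # illustrative Persian review
```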
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5772 | 1.0 | 986 | 0.4229 | 0.8065 | 0.8057 | 0.8086 | 0.8049 |
| 0.3363 | 2.0 | 1972 | 0.4512 | 0.8168 | 0.8163 | 0.8187 | 0.8155 |
| 0.2009 | 3.0 | 2958 | 0.5756 | 0.8122 | 0.8097 | 0.8140 | 0.8103 |
| 0.1264 | 4.0 | 3944 | 0.8273 | 0.8213 | 0.8201 | 0.8211 | 0.8197 |
| 0.0762 | 5.0 | 4930 | 1.0302 | 0.8139 | 0.8130 | 0.8151 | 0.8123 |
| 0.0464 | 6.0 | 5916 | 1.2564 | 0.8116 | 0.8097 | 0.8098 | 0.8096 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755702380 | 0xaoyama | 2025-08-20T15:06:53Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T15:06:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stanpony/gptnano_5M_lexinvariant_2_epochs_then_vanilla_20250820_121428 | stanpony | 2025-08-20T15:05:47Z | 0 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-generation", "license:mit", "region:us"] | text-generation | 2025-08-20T15:05:44Z |
---
license: mit
pipeline_tag: text-generation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
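Since the code link is still a placeholder, the snippet below is only a hypothetical sketch of the PyTorchModelHubMixin reload pattern; the `GPTNano` class and its constructor arguments are invented stand-ins for the real training code:

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical stand-in: the real architecture for this checkpoint is not published here.
class GPTNano(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, vocab_size: int = 50257, d_model: int = 128):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embed(token_ids)

# from_pretrained is provided by the mixin; it restores weights saved via save_pretrained.
model = GPTNano.from_pretrained(
    "stanpony/gptnano_5M_lexinvariant_2_epochs_then_vanilla_20250820_121428"
)
```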
|
Arko007/fashion-ai-lora-v2 | Arko007 | 2025-08-20T15:05:43Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2025-08-20T12:51:01Z |
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of a sks fashion item
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - Arko007/fashion-ai-lora-v2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of a sks fashion item" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
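Until the authors add their own snippet, the following is a minimal sketch, assuming the LoRA weights in this repo load cleanly onto the base checkpoint named in the metadata:

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: assumes the adapter targets the SD 1.5 UNet, as the tags suggest.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Arko007/fashion-ai-lora-v2")

# The instance prompt used during DreamBooth training.
image = pipe("a photo of a sks fashion item").images[0]
image.save("fashion.png")
```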
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
unitova/blockassist-bc-zealous_sneaky_raven_1755700571 | unitova | 2025-08-20T15:04:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T15:04:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755700428 | katanyasekolah | 2025-08-20T15:02:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T15:02:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
finalform/foamMistralV0.3-7B-Instruct | finalform | 2025-08-20T15:02:22Z | 0 | 0 | peft | ["peft", "tensorboard", "safetensors", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3", "lora", "sft", "transformers", "trl", "text-generation", "conversational", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "region:us"] | text-generation | 2025-08-20T14:57:42Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
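Pending the authors' own snippet, here is a minimal sketch for loading the LoRA adapter onto its base model (the base and adapter ids come from the card metadata; device placement is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: assumes you have access to the gated base checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3", device_map="auto"
)
model = PeftModel.from_pretrained(base, "finalform/foamMistralV0.3-7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```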
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
koloni/blockassist-bc-deadly_graceful_stingray_1755700488 | koloni | 2025-08-20T15:02:09Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T15:02:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ftfyhh/wan2.1_14B_marat_safin_style_lora | Ftfyhh | 2025-08-20T15:01:25Z | 0 | 0 | null | ["base_model:Wan-AI/Wan2.1-T2V-14B", "base_model:finetune:Wan-AI/Wan2.1-T2V-14B", "region:us"] | null | 2025-08-20T14:49:32Z |
---
base_model:
- Wan-AI/Wan2.1-T2V-14B
---
Made for Wan2.1-T2V-14B, but I recommend using it with Wan2.2 with a single LOW-noise sampler. Don't use two samplers (high + low); that results in a wrong composition.
Recommended settings for t2i: LOW-noise checkpoint, 1280x720 (horizontal orientation is preferable), 6 steps, lightx2v LoRA 0.50, fusionx LoRA 0.50, sampler: res_2s, scheduler: bong_tangent.
LoRA strength: 1.00-1.50 (I like 1.50).
Trigger word: Marat Safin style
Example prompt: `Marat Safin style. Woman is sitting on the floor by a bathub at bathroom in a modern apartment in a crop top and denim shorts. At first she is looking at viewer then looking down. upper body shot, side view.`
|
AnonymousCS/xlmr_immigration_combo14_1 | AnonymousCS | 2025-08-20T15:00:55Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T14:19:03Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo14_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo14_1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2406
- Accuracy: 0.9357
- 1-f1: 0.9008
- 1-recall: 0.8764
- 1-precision: 0.9265
- Balanced Acc: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2472 | 1.0 | 25 | 0.1992 | 0.9422 | 0.9105 | 0.8842 | 0.9385 | 0.9276 |
| 0.1253 | 2.0 | 50 | 0.2118 | 0.9383 | 0.9048 | 0.8803 | 0.9306 | 0.9238 |
| 0.1745 | 3.0 | 75 | 0.2406 | 0.9357 | 0.9008 | 0.8764 | 0.9265 | 0.9209 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Trelis/Qwen3-4B_ds-arc-agi-2-perfect-50-c485 | Trelis | 2025-08-20T15:00:42Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T14:59:11Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755700402 | lisaozill03 | 2025-08-20T14:58:57Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:58:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755700304 | quantumxnode | 2025-08-20T14:58:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:58:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
graelo/Magistral-Small-2507-6bits | graelo | 2025-08-20T14:56:37Z | 0 | 0 | mlx | ["mlx", "safetensors", "mistral", "vllm", "mistral-common", "text-generation", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Magistral-Small-2507", "base_model:quantized:mistralai/Magistral-Small-2507", "license:apache-2.0", "6-bit", "region:us"] | text-generation | 2025-08-20T14:54:33Z |
---
base_model: mistralai/Magistral-Small-2507
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
library_name: mlx
license: apache-2.0
inference: false
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- vllm
- mistral-common
- mlx
pipeline_tag: text-generation
---
# graelo/Magistral-Small-2507-6bits
This model [graelo/Magistral-Small-2507-6bits](https://huggingface.co/graelo/Magistral-Small-2507-6bits) was
converted to MLX format from [mistralai/Magistral-Small-2507](https://huggingface.co/mistralai/Magistral-Small-2507)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("graelo/Magistral-Small-2507-6bits")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
roeker/blockassist-bc-quick_wiry_owl_1755701646 | roeker | 2025-08-20T14:55:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:54:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
4everStudent/sft-mat-qwen3-4B-081925-merged | 4everStudent | 2025-08-20T14:54:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T14:53:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
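No snippet is provided yet; the following is a minimal sketch for text generation with this merged checkpoint (the repo id and `text-generation` task come from the metadata above; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "4everStudent/sft-mat-qwen3-4B-081925-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Illustrative prompt; the card does not document the model's intended domain.
inputs = tokenizer("Hello, what can you do?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```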
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/MathTutor-7B-H_v0.0.1-GGUF | mradermacher | 2025-08-20T14:54:13Z | 48 | 0 | transformers | ["transformers", "gguf", "en", "base_model:Sandesh2027/MathTutor-7B-H_v0.0.1", "base_model:quantized:Sandesh2027/MathTutor-7B-H_v0.0.1", "endpoints_compatible", "region:us", "conversational"] | null | 2025-07-10T00:02:23Z |
---
base_model: Sandesh2027/MathTutor-7B-H_v0.0.1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sandesh2027/MathTutor-7B-H_v0.0.1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MathTutor-7B-H_v0.0.1-GGUF).***
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
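To fetch a single quant programmatically, here is a minimal sketch using `huggingface_hub` (the Q4_K_M filename matches the table below; swap in whichever quant you want):

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/MathTutor-7B-H_v0.0.1-GGUF",
    filename="MathTutor-7B-H_v0.0.1.Q4_K_M.gguf",
)
print(path)
```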
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MathTutor-7B-H_v0.0.1-GGUF/resolve/main/MathTutor-7B-H_v0.0.1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
swagdowdle/testqwents | swagdowdle | 2025-08-20T14:53:48Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3_moe", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-08-20T14:53:02Z |
---
base_model: unsloth/qwen3-30b-a3b-instruct-2507
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3_moe
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** swagdowdle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-30b-a3b-instruct-2507
This qwen3_moe model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755701545 | Vasya777 | 2025-08-20T14:53:32Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:53:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755701430 | liukevin666 | 2025-08-20T14:53:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:51:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755699960 | thanobidex | 2025-08-20T14:52:35Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:52:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755699778 | chainway9 | 2025-08-20T14:51:27Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:51:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF | tensorblock | 2025-08-20T14:51:22Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "TensorBlock", "GGUF", "en", "base_model:CompassioninMachineLearning/alpacallama_plus1k_80_20mix", "base_model:quantized:CompassioninMachineLearning/alpacallama_plus1k_80_20mix", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-20T13:23:55Z |
---
base_model: CompassioninMachineLearning/alpacallama_plus1k_80_20mix
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- TensorBlock
- GGUF
license: apache-2.0
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[](https://tensorblock.co)
[](https://twitter.com/tensorblock_aoi)
[](https://discord.gg/Ej5NmeHFf2)
[](https://github.com/TensorBlock)
[](https://t.me/TensorBlock)
## CompassioninMachineLearning/alpacallama_plus1k_80_20mix - GGUF
<div style="text-align: left; margin: 20px 0;">
<a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Join our Discord to learn more about what we're building ↗
</a>
</div>
This repo contains GGUF format model files for [CompassioninMachineLearning/alpacallama_plus1k_80_20mix](https://huggingface.co/CompassioninMachineLearning/alpacallama_plus1k_80_20mix).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">🚀 Try it now! 🚀</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">👀 See what we built 👀</a>
</th>
</tr>
</table>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [alpacallama_plus1k_80_20mix-Q2_K.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
| [alpacallama_plus1k_80_20mix-Q3_K_S.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q3_K_S.gguf) | Q3_K_S | 3.665 GB | very small, high quality loss |
| [alpacallama_plus1k_80_20mix-Q3_K_M.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
| [alpacallama_plus1k_80_20mix-Q3_K_L.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
| [alpacallama_plus1k_80_20mix-Q4_0.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [alpacallama_plus1k_80_20mix-Q4_K_S.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
| [alpacallama_plus1k_80_20mix-Q4_K_M.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
| [alpacallama_plus1k_80_20mix-Q5_0.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [alpacallama_plus1k_80_20mix-Q5_K_S.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
| [alpacallama_plus1k_80_20mix-Q5_K_M.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
| [alpacallama_plus1k_80_20mix-Q6_K.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
| [alpacallama_plus1k_80_20mix-Q8_0.gguf](https://huggingface.co/tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF/blob/main/alpacallama_plus1k_80_20mix-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF --include "alpacallama_plus1k_80_20mix-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/CompassioninMachineLearning_alpacallama_plus1k_80_20mix-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
luckycanucky/llama3-unaligned | luckycanucky | 2025-08-20T14:51:12Z | 0 | 0 | transformers | ["transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational"] | null | 2025-08-20T13:08:58Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1755701220 | roeker | 2025-08-20T14:48:18Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quick wiry owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:47:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755699680 | mang3dd | 2025-08-20T14:47:26Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T14:47:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
abishekcodes/bert-new-ner | abishekcodes | 2025-08-20T14:47:04Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-08-19T19:42:53Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: bert-new-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-new-ner
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
- Precision: 0.9645
- Recall: 0.9682
- F1: 0.9664
## Model description
More information needed
## Intended uses & limitations
More information needed
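Meanwhile, here is a minimal sketch of token-classification inference with this checkpoint (the entity label set depends on the undocumented training data, so treat the outputs as illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="abishekcodes/bert-new-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)
print(ner("Ada Lovelace was born in London."))
```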
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0227 | 1.0 | 1002 | 0.0263 | 0.9540 | 0.9614 | 0.9577 |
| 0.0125 | 2.0 | 2004 | 0.0237 | 0.9554 | 0.9720 | 0.9637 |
| 0.0064 | 3.0 | 3006 | 0.0246 | 0.9645 | 0.9682 | 0.9664 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
|
mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF | mradermacher | 2025-08-20T14:45:45Z | 518 | 1 | transformers | ["transformers", "gguf", "Thinking: Disabled", "Forge", "code", "mot", "stem", "coder", "trl", "en", "zh", "dataset:prithivMLmods/Open-Omega-Forge-1M", "base_model:prithivMLmods/Omega-Qwen2.5-Coder-3B", "base_model:quantized:prithivMLmods/Omega-Qwen2.5-Coder-3B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-07-16T07:35:38Z |
---
base_model: prithivMLmods/Omega-Qwen2.5-Coder-3B
datasets:
- prithivMLmods/Open-Omega-Forge-1M
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- 'Thinking: Disabled'
- Forge
- code
- mot
- stem
- coder
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Omega-Qwen2.5-Coder-3B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Omega-Qwen2.5-Coder-3B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q4_0.gguf) | i1-Q4_0 | 1.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF/resolve/main/Omega-Qwen2.5-Coder-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dileepsathyan/my_awesome_qa_model | dileepsathyan | 2025-08-20T14:42:14Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2025-08-20T14:31:08Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
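Meanwhile, here is a minimal sketch of extractive question answering with this checkpoint (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="dileepsathyan/my_awesome_qa_model")
result = qa(
    question="What task does the model perform?",
    context="This DistilBERT checkpoint was fine-tuned to extract answer spans from a passage.",
)
print(result["answer"], result["score"])
```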
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3540 |
| 2.6485 | 2.0 | 500 | 1.7377 |
| 2.6485 | 3.0 | 750 | 1.7156 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
forkkyty/blockassist-bc-skilled_arctic_lion_1755700890
|
forkkyty
| 2025-08-20T14:41:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled arctic lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:41:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled arctic lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755699230
|
kojeklollipop
| 2025-08-20T14:40:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:40:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aputze/Whispr
|
aputze
| 2025-08-20T14:40:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T14:15:11Z |
---
title: Whispr
emoji: 🎤
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 5.43.1
app_file: app.py
pinned: false
---
# Whispr - Audio Transcription
Audio transcription using OpenAI's Whisper model through faster-whisper.
## Features
- Audio file upload and microphone recording
- Multiple model sizes (tiny to large)
- Optimized for Hebrew speech
- Real-time transcription with progress indicators
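A minimal sketch of the underlying transcription call with `faster-whisper` (the model size, beam size, and sample file are assumptions, not values taken from this Space):
```python
from faster_whisper import WhisperModel

# "small" is an illustrative pick; the Space exposes tiny through large
model = WhisperModel("small", device="cpu", compute_type="int8")

# language="he" reflects the Hebrew-speech focus noted above
segments, info = model.transcribe("sample.wav", language="he", beam_size=5)
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```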
|
loyal-misc/myst
|
loyal-misc
| 2025-08-20T14:36:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:unlicense",
"region:us"
] |
text-to-image
| 2025-08-20T12:10:35Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/myst.png
text: '-'
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: myst, scalie, female
license: unlicense
---
# myst
<Gallery />
## Trigger words
You should use `myst`, `scalie`, and `female` to trigger the image generation.
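A hedged sketch of applying this LoRA on top of the listed base model with 🤗 diffusers (this assumes the base repo loads through the standard SDXL pipeline and that the LoRA weights sit at the repo root; adjust if the layout differs):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repo
pipe.load_lora_weights("loyal-misc/myst")

# The prompt uses the trigger words listed above
image = pipe("myst, scalie, female", num_inference_steps=30).images[0]
image.save("myst.png")
```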
## Download model
[Download](/loyal-misc/myst/tree/main) them in the Files & versions tab.
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755698685
|
coelacanthxyz
| 2025-08-20T14:35:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:35:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Qwen3_Medical_GRPO-GGUF
|
mradermacher
| 2025-08-20T14:35:05Z | 352 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"medical",
"en",
"zh",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:lastmass/medical-o1-reasoning-SFT-keywords",
"base_model:lastmass/Qwen3_Medical_GRPO",
"base_model:quantized:lastmass/Qwen3_Medical_GRPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-24T15:58:10Z |
---
base_model: lastmass/Qwen3_Medical_GRPO
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- lastmass/medical-o1-reasoning-SFT-keywords
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/lastmass/Qwen3_Medical_GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3_Medical_GRPO-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3_Medical_GRPO-GGUF/resolve/main/Qwen3_Medical_GRPO.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
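For illustration, a hedged sketch of chatting with the Q4_K_M quant from the table above via `llama-cpp-python` (the package choice and settings are assumptions; `create_chat_completion` applies the chat template embedded in the GGUF):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the "fast, recommended" Q4_K_M file listed above
path = hf_hub_download(
    repo_id="mradermacher/Qwen3_Medical_GRPO-GGUF",
    filename="Qwen3_Medical_GRPO.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List common causes of chest pain."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```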
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:34:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:24:38Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.33_part_SFT_0.0002/runs/l27wsth5)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
|
joanna302
| 2025-08-20T14:32:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:53:43Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_1_part_SFT_2e-05/runs/pjfkh85c)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755700300
|
lilTAT
| 2025-08-20T14:32:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:32:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annasoli/Qwen2.5-14B_SVt_l24_lr2e-4_a256_2E_technical-vehicles_KL8_1e6
|
annasoli
| 2025-08-20T14:32:27Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:32:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
youuotty/blockassist-bc-raging_hardy_octopus_1755700332
|
youuotty
| 2025-08-20T14:32:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging hardy octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:32:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging hardy octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sdagsadgd/blockassist-bc-sedate_squeaky_salamander_1755696899
|
sdagsadgd
| 2025-08-20T14:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
razor534/blockassist-bc-lazy_extinct_termite_1755700056
|
razor534
| 2025-08-20T14:28:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lazy extinct termite",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:28:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lazy extinct termite
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755700004
|
yaelahnal
| 2025-08-20T14:27:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:27:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-Coder-7B-Instruct-airline_2k_v1_tag5_progress
|
lemonhat
| 2025-08-20T14:27:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T14:26:06Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: airline_2k_v1_tag5_progress
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# airline_2k_v1_tag5_progress
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the airline_2k_v1_tag5_progress dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4934
## Model description
More information needed
## Intended uses & limitations
More information needed
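Pending details from the author, a minimal sketch of prompting the checkpoint with the 🤗 `text-generation` pipeline (the prompt and generation settings are placeholders):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="lemonhat/Qwen2.5-Coder-7B-Instruct-airline_2k_v1_tag5_progress",
    device_map="auto",
)
messages = [{"role": "user", "content": "Rebook a passenger onto the next available flight."}]
out = generator(messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```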
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
hawierdev/shellm-v0.1
|
hawierdev
| 2025-08-20T14:27:16Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-20T08:26:54Z |
# ShellM v0.1
ShellM is a specialized language model based on Qwen2.5 1.5B, fine-tuned to understand terminal context and suggest appropriate shell commands.
## Model Specifications
- **Architecture**: Qwen2ForCausalLM
- **Size**: 1.5B parameters
- **Hidden size**: 1536
- **Layers**: 28
- **Attention heads**: 12 (2 key-value heads)
- **Max context length**: 32768 tokens
- **Precision**: bfloat16
## Capabilities
- Terminal command suggestions based on context
- Understanding complex system tasks
- Shell script generation
- Command debugging assistance
- Multi-platform support
## Usage
The model uses the ChatML format for conversations:
```
<|im_start|>user
How to display all processes using port 8080?
<|im_end|>
<|im_start|>assistant
lsof -i :8080
<|im_end|>
```
### Example Use Cases
**File Management:**
```
User: How to find all Python files larger than 1MB?
Assistant: find . -name "*.py" -size +1M
```
**Process Management:**
```
User: How to kill all python processes?
Assistant: pkill python
```
**Git Operations:**
```
User: How to undo last commit without losing changes?
Assistant: git reset --soft HEAD~1
```
## Installation and Usage
Requirements:
- transformers
- torch
- tokenizers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load directly from the Hub
model_name = "hawierdev/shellm-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage with the ChatML format described above
prompt = "<|im_start|>user\nHow to check disk usage?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=False)
print(response)
```
## Special Tokens
The model supports standard Qwen2 tokens:
- `<|im_start|>`, `<|im_end|>` - conversation markers
- `<|vision_pad|>` - padding token
- Fill-in-the-middle tokens: `<|fim_prefix|>`, `<|fim_middle|>`, `<|fim_suffix|>`
## Version Info
Version: v0.1
Based on: Qwen2.5-1.5B
Fine-tuned with: Unsloth v2025.8.8
|
Savoxism/multilingual-e5-small-finetuned-stage2
|
Savoxism
| 2025-08-20T14:26:37Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:170319",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Savoxism/multilingual-e5-small-finetuned-stage1",
"base_model:finetune:Savoxism/multilingual-e5-small-finetuned-stage1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T14:26:17Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:170319
- loss:MultipleNegativesRankingLoss
base_model: Savoxism/multilingual-e5-small-finetuned-stage1
widget:
- source_sentence: 'query: Phẫu thuật vết thương khớp sẽ có quy trình tiến hành như
thế nào?'
sentences:
- 'passage: phẫu thuật vết thương khớp iv chuẩn bị 1 người thực hiện: 03phẫu thuật
viên chuyên khoa chấn thương chỉnh hình 2 người bệnh và gia đình: - chuẩn bị tâm
lý cần được giải thích trước mổ về quá trình phẫu thuật hậu phẫu và tập phục hồi
chức năng sau mổ chuẩn bị hồ sơ bệnh án đầy đủ thủ tục hành chính và các xét nghiệm
cần thiết - chuẩn bị người bệnh trước mổ: nhịn ăn thụt tháo vệ sinh vùng mổ kháng
sinh dự phòng 3 phương tiện trang thiết bị: - bộ dụng cụ mổ chấn thương chi -
thực hiện tại các cơ sở có chuyên khoa chấn thương chỉnh hình 4 dự kiến thời gian
tiến hành: 60 phút v các bước tiến hành 1 tư thế: người bệnh nằm ngửa hoặc nghiêng
tùy theo vùng khớp cần phẫu thuật 2 vô cảm - kháng sinh dự phòng - vô cảm người
bệnh bằng gây tê tủy sống hoặc gây mê 3 kỹ thuật: - sát khuẩn vùng mổ bằng dung
dịch betadine - dùng garo hơi (nếu có thể) trong mổ với áp lực bằng hai lần áp
lực động mạch tối đa - cắt lọc rạch rộng mép da vết thương vùng khớp - mở bao
khớp để vào bộc lộ vùng mặt khớp - bơm rửa làm sạch khớp bằng dung dịch huyết
thanh vô khuẩn - cầm máu đặt dẫn lưu ngoại khớp - đóng cân và phần mềm theo các
lớp giải phẫu - đóng da một lớp da thưa - cố định bột tùy theo thương tổn (nẹp
bột hoặc bột rạch dọc)'
- 'passage: căn cứ và thời hạn kháng nghị 1 bản án quyết định sơ thẩm chưa có hiệu
lực pháp luật bị kháng nghị theo thủ tục phúc thẩm khi có một trong những căn
cứ sau đây: a) việc điều tra xét hỏi tại phiên tòa sơ thẩm không đầy đủ dẫn đến
đánh giá không đúng tính chất của vụ án; b) kết luận quyết định trong bản án quyết
định sơ thẩm không phù hợp với các tình tiết khách quan của vụ án; c) có sai lầm
trong việc áp dụng các quy định của bộ luật hình sự bộ luật dân sự và các văn
bản pháp luật khác; d) thành phần hội đồng xét xử sơ thẩm không đúng luật định
hoặc có vi phạm nghiêm trọng khác về thủ tục tố tụng 2 thời hạn kháng nghị bản
án quyết định của tòa án theo điều 337 bộ luật tố tụng hình sự'
- 'passage: thủ tục hành chính cấp tỉnh 4 thủ tục quyết định công nhận cơ sở sản
xuất kinh doanh sử dụng từ 30% tổng số lao động trở lên là người khuyết tật -
trình tự thời gian thực hiện: + bước 1: cơ sở sản xuất kinh doanh sử dụng từ 30%
tổng số lao động trở lên là người khuyết tật lập 01 bộ hồ sơ theo quy định gửi
(trực tiếp hoặc qua đường bưu điện) đến sở lao động - thương binh và xã hội nơi
cơ sở có trụ sở chính + bước 2: trong thời hạn 15 ngày làm việc kể từ ngày nhận
đủ hồ sơ theo quy định sở lao động -thương binh và xã hội có trách nhiệm thẩm
định và quyết định công nhận cơ sở sản xuất kinh doanh sử dụng từ 30% tổng số
lao động trở lên là người khuyết tật hoặc có văn bản thông báo lý do không đủ
điều kiện để công nhận cơ sở sản xuất kinh doanh sử dụng từ 30% tổng số lao động
trở lên là người khuyết tật - cách thức thực hiện: nộp hồ sơ trực tiếp hoặc qua
đường bưu điện'
- source_sentence: 'query: Xử lý vi phạm pháp luật về an ninh mạng như thế nào?'
sentences:
- 'passage: hủy tư cách công ty đại chúng 1 công ty đại chúng có trách nhiệm gửi
ủy ban chứng khoán nhà nước văn bản thông báo kèm danh sách cổ đông do tổng công
ty lưu ký và bù trừ chứng khoán việt nam cung cấp trong thời hạn 15 ngày kể từ
ngày có vốn điều lệ đã góp không đủ 30 tỷ đồng tính trên báo cáo tài chính gần
nhất được kiểm toán hoặc có cơ cấu cổ đông không đáp ứng điều kiện quy định tại
điểm a khoản 1 điều 32 của luật này căn cứ theo xác nhận của tổng công ty lưu
ký và bù trừ chứng khoán việt nam 2 sau 01 năm kể từ ngày không còn đáp ứng quy
định tại điểm a khoản 1 điều 32 của luật này mà công ty vẫn không đáp ứng được
điều kiện là công ty đại chúng ủy ban chứng khoán nhà nước xem xét hủy tư cách
công ty đại chúng 3 công ty phải thực hiện đầy đủ các quy định liên quan đến công
ty đại chúng cho đến thời điểm ủy ban chứng khoán nhà nước thông báo hủy tư cách
công ty đại chúng 4 trong thời hạn 07 ngày làm việc kể từ ngày nhận được thông
báo của ủy ban chứng khoán nhà nước về việc hủy tư cách công ty đại chúng công
ty có trách nhiệm thông báo việc hủy tư cách công ty đại chúng trên trang thông
tin điện tử của công ty phương tiện công bố thông tin của ủy ban chứng khoán nhà
nước sở giao dịch chứng khoán việt nam và thực hiện thủ tục hủy niêm yết đăng
ký giao dịch theo quy định của pháp luật 5 bộ trưởng bộ tài chính quy định việc
hủy tư cách công ty đại chúng đối với trường hợp không đáp ứng điều kiện là công
ty đại chúng do tổ chức lại giải thể phá sản doanh nghiệp'
- 'passage: vi phạm về tàng trữ phát hành xuất bản phẩm 7 hình thức xử phạt bổ sung:
tước quyền sử dụng giấy phép hoạt động kinh doanh nhập khẩu xuất bản phẩm hoặc
đình chỉ hoạt động từ 01 đến 03 tháng đối với hành vi vi phạm quy định tại khoản
6 điều này 8 biện pháp khắc phục hậu quả: a) buộc thu hồi xuất bản phẩm đối với
hành vi vi phạm quy định tại điểm b khoản 1 điều này; b) buộc tiêu hủy xuất bản
phẩm đối với hành vi vi phạm quy định tại điểm a và điểm c khoản 1; điểm a và
điểm d khoản 2; điểm a và điểm c khoản 3; các điểm a b c e và g khoản 4; các điểm
a b c và đ khoản 5; khoản 6 điều này; c) buộc nộp lại số lợi bất hợp pháp có được
do thực hiện hành vi vi phạm hành chính đối với hành vi quy định tại các điểm
a và b khoản 1; điểm a khoản 2; điểm a khoản 3; các điểm a b và c khoản 4; các
điểm a b và c khoản 5; khoản 6 điều này'
- 'passage: xử lý vi phạm pháp luật về thực hiện dân chủ ở cơ sở 1 cá nhân có hành
vi vi phạm pháp luật về thực hiện dân chủ ở cơ sở thì tùy theo tính chất mức độ
vi phạm mà bị xử phạt vi phạm hành chính áp dụng biện pháp xử lý hành chính hoặc
bị truy cứu trách nhiệm hình sự; nếu gây thiệt hại thì phải bồi thường theo quy
định của pháp luật 2 tổ chức vi phạm quy định của luật này và quy định khác của
pháp luật có liên quan đến thực hiện dân chủ ở cơ sở thì tùy theo tính chất mức
độ vi phạm mà bị xử phạt vi phạm hành chính; nếu gây thiệt hại thì phải bồi thường
theo quy định của pháp luật 3 cán bộ công chức viên chức lợi dụng chức vụ quyền
hạn vi phạm quy định của luật này xâm phạm lợi ích của nhà nước quyền và lợi ích
hợp pháp của tổ chức cá nhân thì tùy theo tính chất mức độ vi phạm mà bị xử lý
kỷ luật hoặc bị truy cứu trách nhiệm hình sự; nếu gây thiệt hại thì phải bồi thường
bồi hoàn theo quy định của pháp luật 4 việc xử phạt vi phạm hành chính xử lý kỷ
luật đối với các hành vi vi phạm pháp luật về thực hiện dân chủ ở cơ sở thực hiện
theo quy định của chính phủ'
- source_sentence: 'query: Sử dụng dữ liệu cá nhân của trẻ từ 7 tuổi mà không được
sự đồng ý sẽ bị xử lý như thế nào?'
sentences:
- 'passage: 1 cơ sở khám bệnh chữa bệnh điều trị dưới 1000 người bệnh đột quỵ trong
một năm thì thành lập khoa đột quỵ quy mô giường bệnh của khoa đột quỵ là dưới
50 giường bệnh 2 nhân lực: theo quy định tại khoản 2 điều 10 của thông tư này
và theo các quy định hiện hành về cơ cấu tổ chức và hoạt động của khoa lâm sàng
3 trang thiết bị thiết yếu: a) có đủ trang thiết bị thiết yếu theo danh mục trang
thiết bị quy định tại phụ lục 02 ban hành kèm theo thông tư này b) cơ số các trang
thiết bị thiết yếu do người đứng đầu cơ sở khám bệnh chữa bệnh quyết định dựa
trên quy mô giường bệnh và nhu cầu khám bệnh chữa bệnh'
- 'passage: “điều 45 điều kiện hưởng chế độ tai nạn lao động người lao động tham
gia bảo hiểm tai nạn lao động bệnh nghề nghiệp được hưởng chế độ tai nạn lao động
khi có đủ các điều kiện sau đây: 1 bị tai nạn thuộc một trong các trường hợp sau
đây: a) tại nơi làm việc và trong giờ làm việc kể cả khi đang thực hiện các nhu
cầu sinh hoạt cần thiết tại nơi làm việc hoặc trong giờ làm việc mà bộ luật lao
động và nội quy của cơ sở sản xuất kinh doanh cho phép bao gồm nghỉ giải lao ăn
giữa ca ăn bồi dưỡng hiện vật làm vệ sinh kinh nguyệt tắm rửa cho con bú đi vệ
sinh; b) ngoài nơi làm việc hoặc ngoài giờ làm việc khi thực hiện công việc theo
yêu cầu của người sử dụng lao động hoặc người được người sử dụng lao động ủy quyền
bằng văn bản trực tiếp quản lý lao động; c) trên tuyến đường đi từ nơi ở đến nơi
làm việc hoặc từ nơi làm việc về nơi ở trong khoảng thời gian và tuyến đường hợp
lý; 2 suy giảm khả năng lao động từ 5% trở lên do bị tai nạn quy định tại khoản
1 điều này; 3 người lao động không được hưởng chế độ do quỹ bảo hiểm tai nạn lao
động bệnh nghề nghiệp chi trả nếu thuộc một trong các nguyên nhân quy định tại
khoản 1 điều 40 của luật này ”'
- 'passage: xử lý vi phạm quy định bảo vệ dữ liệu cá nhân cơ quan tổ chức cá nhân
vi phạm quy định bảo vệ dữ liệu cá nhân tùy theo mức độ có thể bị xử lý kỷ luật
xử phạt vi phạm hành chính xử lý hình sự theo quy định'
- source_sentence: 'query: Cơ sở giáo dục có vốn đầu tư nước ngoài có hành vi gian
lận để được thành lập thì có bị đình chỉ hoạt động giáo dục không?'
sentences:
- 'passage: đình chỉ hoạt động đào tạo của cơ sở giáo dục đại học 1 cơ sở giáo dục
đại học bị đình chỉ hoạt động đào tạo trong những trường hợp sau đây: a) có hành
vi gian lận để được thành lập hoặc cho phép thành lập cho phép hoạt động đào tạo;
b) không bảo đảm một trong các điều kiện quy định tại khoản 1 điều 23 của luật
này; c) người cho phép hoạt động đào tạo không đúng thẩm quyền; d) vi phạm quy
định của pháp luật về giáo dục bị xử phạt vi phạm hành chính ở mức độ phải đình
chỉ hoạt động; đ) các trường hợp khác theo quy định của pháp luật 2 quyết định
đình chỉ hoạt động đào tạo phải xác định rõ lý do đình chỉ thời hạn đình chỉ biện
pháp bảo đảm lợi ích hợp pháp của giảng viên người lao động và người học quyết
định đình chỉ hoạt động đào tạo được công bố công khai trên các phương tiện thông
tin đại chúng 3 sau thời hạn đình chỉ nếu nguyên nhân dẫn đến việc đình chỉ được
khắc phục thì người có thẩm quyền quyết định đình chỉ ra quyết định cho phép tiếp
tục hoạt động đào tạo'
- 'passage: nhiệm vụ quyền hạn của thanh tra bộ 1 thực hiện nhiệm vụ quyền hạn quy
định tại điều 18 luật thanh tra 2 hướng dẫn kiểm tra đôn đốc công an các đơn vị
địa phương xây dựng và thực hiện chương trình kế hoạch thanh tra 3 tổ chức tập
huấn nghiệp vụ thanh tra cho thủ trưởng thanh tra viên cán bộ thanh tra chuyên
trách hoặc kiêm nhiệm trong công an nhân dân 4 phổ biến tuyên truyền hướng dẫn
đôn đốc kiểm tra thanh tra công an các đơn vị địa phương thực hiện các quy định
của pháp luật về thanh tra 5 tổng kết rút kinh nghiệm trao đổi thông tin và nghiên
cứu khoa học về công tác thanh tra trong phạm vi quản lý nhà nước của bộ công
an'
- 'passage: khai thác rừng trái pháp luật 8 hình thức xử phạt bổ sung: a) tịch thu
tang vật đối với hành vi quy định tại khoản 1 khoản 2 khoản 3 khoản 4 khoản 5
và khoản 6 điều này; b) tịch thu phương tiện giao thông thô sơ đường bộ và các
dụng cụ công cụ được sử dụng để thực hiện các hành vi quy định tại khoản 1 khoản
2 khoản 3 khoản 4 khoản 5 và khoản 6 điều này;'
- source_sentence: 'query: Thủ tục bổ sung thông tin Giấy xác nhận đủ điều kiện làm
tổng đại lý kinh doanh xăng dầu tại Sở Công thương được thực hiện theo trình tự
nào?'
sentences:
- 'passage: quyền hạn của liên đoàn 1 tuyên truyền tôn chỉ mục đích hoạt động của
liên đoàn 2 đại diện cho hội viên trong mối quan hệ đối nội đối ngoại có liên
quan đến chức năng nhiệm vụ của liên đoàn theo quy định của pháp luật 3 tổ chức
phối hợp hoạt động giữa các hội viên vì lợi ích chung của liên đoàn; hòa giải
tranh chấp trong nội bộ liên đoàn 4 tham gia tổ chức đào tạo bồi dưỡng huấn luyện
chuyên môn cho huấn luyện viên trọng tài cán bộ quản lý và được cấp chứng chỉ
theo quy định của pháp luật quản lý về mặt chuyên môn đối với các đối tượng này
trong quá trình tham gia các hoạt động do liên đoàn tổ chức 5 tư vấn phản biện
các vấn đề thuộc phạm vi hoạt động của liên đoàn theo đề nghị của cơ quan quản
lý nhà nước phù hợp với quy định của pháp luật 6 tham gia ý kiến vào các văn bản
quy phạm pháp luật có liên quan đến nội dung hoạt động của liên đoàn theo quy
định của pháp luật kiến nghị với cơ quan nhà nước có thẩm quyền đối với các vấn
đề liên quan tới sự phát triển của liên đoàn và lĩnh vực liên đoàn hoạt động 7
phối hợp với các cơ quan tổ chức có liên quan để thực hiện nhiệm vụ của liên đoàn
đúng hướng và có hiệu quả 8 được gây quỹ liên đoàn trên cơ sở hội phí của hội
viên và các nguồn thu từ hoạt động kinh doanh dịch vụ theo quy định của pháp luật;
được nhà nước hỗ trợ và cấp kinh phí cho các hoạt động gắn với nhiệm vụ của nhà
nước giao theo quy định của pháp luật 9 được nhận các nguồn tài trợ ủng hộ hợp
pháp của các tổ chức cá nhân trong và ngoài nước; quản lý và sử dụng các nguồn
tài trợ ủng hộ này theo quy định của pháp luật 10 được gia nhập làm hội viên của
các liên đoàn hiệp hội quốc tế và khu vực tham gia ký kết và thực hiện thỏa thuận
quốc tế theo quy định của pháp luật'
- 'passage: thẩm quyền hồ sơ trình tự cấp giấy xác nhận đủ điều kiện làm thương
nhân phân phối xăng dầu 3 trình tự cấp giấy xác nhận đủ điều kiện làm thương nhân
phân phối xăng dầu a) thương nhân gửi một (01) bộ hồ sơ về bộ công thương b) trường
hợp chưa đủ hồ sơ hợp lệ trong vòng bảy (07) ngày làm việc kể từ ngày tiếp nhận
hồ sơ của thương nhân bộ công thương có văn bản yêu cầu thương nhân bổ sung c)
trong thời hạn ba mươi (30) ngày làm việc kể từ khi nhận được hồ sơ hợp lệ bộ
công thương có trách nhiệm xem xét thẩm định và cấp giấy xác nhận đủ điều kiện
làm thương nhân phân phối xăng dầu theo mẫu số 6 tại phụ lục kèm theo nghị định
này cho thương nhân trường hợp từ chối cấp giấy xác nhận do không đủ điều kiện
bộ công thương phải trả lời bằng văn bản và nêu rõ lý do 4 giấy xác nhận đủ điều
kiện làm thương nhân phân phối xăng dầu có thời hạn hiệu lực là năm (05) năm kể
từ ngày cấp mới 5 thương nhân được cấp giấy xác nhận đủ điều kiện làm thương nhân
phân phối xăng dầu phải nộp phí và lệ phí theo quy định của bộ tài chính 6 bộ
công thương có thẩm quyền thu hồi giấy xác nhận đủ điều kiện làm thương nhân phân
phối xăng dầu giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu bị
thu hồi trong các trường hợp: thương nhân không tiếp tục làm thương nhân phân
phối xăng dầu; thương nhân không hoạt động kinh doanh xăng dầu trong thời gian
một (01) tháng trở lên; thương nhân bị phá sản theo quy định của pháp luật; thương
nhân không đáp ứng một trong các điều kiện làm thương nhân phân phối xăng dầu
theo quy định tại điều 13 nghị định này; thương nhân vi phạm nhiều lần hoặc tái
phạm quy định về bảo đảm số lượng chất lượng xăng dầu lưu thông trên thị trường
vi phạm quy định về tăng giảm giá bán xăng dầu tại nghị định này và các trường
hợp khác theo quy định của pháp luật'
- 'passage: cơ cấu tổ chức của khoa đột quỵ khoa đột quỵ được tổ chức các bộ phận
chuyên môn như quy định tại khoản 2 điều 9 thông tư này tùy theo điều kiện của
cơ sở khám bệnh chữa bệnh và yêu cầu của hoạt động khám bệnh chữa bệnh đột quỵ
khoa đột quỵ có thể tổ chức thêm các bộ phận khác'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Savoxism/multilingual-e5-small-finetuned-stage1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Savoxism/multilingual-e5-small-finetuned-stage1](https://huggingface.co/Savoxism/multilingual-e5-small-finetuned-stage1). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Savoxism/multilingual-e5-small-finetuned-stage1](https://huggingface.co/Savoxism/multilingual-e5-small-finetuned-stage1) <!-- at revision 783095264d51a3d681a97e81edeb90f524968d5d -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Savoxism/multilingual-e5-small-finetuned-stage2")
# Run inference
sentences = [
'query: Thủ tục bổ sung thông tin Giấy xác nhận đủ điều kiện làm tổng đại lý kinh doanh xăng dầu tại Sở Công thương được thực hiện theo trình tự nào?',
'passage: thẩm quyền hồ sơ trình tự cấp giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu 3 trình tự cấp giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu a) thương nhân gửi một (01) bộ hồ sơ về bộ công thương b) trường hợp chưa đủ hồ sơ hợp lệ trong vòng bảy (07) ngày làm việc kể từ ngày tiếp nhận hồ sơ của thương nhân bộ công thương có văn bản yêu cầu thương nhân bổ sung c) trong thời hạn ba mươi (30) ngày làm việc kể từ khi nhận được hồ sơ hợp lệ bộ công thương có trách nhiệm xem xét thẩm định và cấp giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu theo mẫu số 6 tại phụ lục kèm theo nghị định này cho thương nhân trường hợp từ chối cấp giấy xác nhận do không đủ điều kiện bộ công thương phải trả lời bằng văn bản và nêu rõ lý do 4 giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu có thời hạn hiệu lực là năm (05) năm kể từ ngày cấp mới 5 thương nhân được cấp giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu phải nộp phí và lệ phí theo quy định của bộ tài chính 6 bộ công thương có thẩm quyền thu hồi giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu giấy xác nhận đủ điều kiện làm thương nhân phân phối xăng dầu bị thu hồi trong các trường hợp: thương nhân không tiếp tục làm thương nhân phân phối xăng dầu; thương nhân không hoạt động kinh doanh xăng dầu trong thời gian một (01) tháng trở lên; thương nhân bị phá sản theo quy định của pháp luật; thương nhân không đáp ứng một trong các điều kiện làm thương nhân phân phối xăng dầu theo quy định tại điều 13 nghị định này; thương nhân vi phạm nhiều lần hoặc tái phạm quy định về bảo đảm số lượng chất lượng xăng dầu lưu thông trên thị trường vi phạm quy định về tăng giảm giá bán xăng dầu tại nghị định này và các trường hợp khác theo quy định của pháp luật',
'passage: quyền hạn của liên đoàn 1 tuyên truyền tôn chỉ mục đích hoạt động của liên đoàn 2 đại diện cho hội viên trong mối quan hệ đối nội đối ngoại có liên quan đến chức năng nhiệm vụ của liên đoàn theo quy định của pháp luật 3 tổ chức phối hợp hoạt động giữa các hội viên vì lợi ích chung của liên đoàn; hòa giải tranh chấp trong nội bộ liên đoàn 4 tham gia tổ chức đào tạo bồi dưỡng huấn luyện chuyên môn cho huấn luyện viên trọng tài cán bộ quản lý và được cấp chứng chỉ theo quy định của pháp luật quản lý về mặt chuyên môn đối với các đối tượng này trong quá trình tham gia các hoạt động do liên đoàn tổ chức 5 tư vấn phản biện các vấn đề thuộc phạm vi hoạt động của liên đoàn theo đề nghị của cơ quan quản lý nhà nước phù hợp với quy định của pháp luật 6 tham gia ý kiến vào các văn bản quy phạm pháp luật có liên quan đến nội dung hoạt động của liên đoàn theo quy định của pháp luật kiến nghị với cơ quan nhà nước có thẩm quyền đối với các vấn đề liên quan tới sự phát triển của liên đoàn và lĩnh vực liên đoàn hoạt động 7 phối hợp với các cơ quan tổ chức có liên quan để thực hiện nhiệm vụ của liên đoàn đúng hướng và có hiệu quả 8 được gây quỹ liên đoàn trên cơ sở hội phí của hội viên và các nguồn thu từ hoạt động kinh doanh dịch vụ theo quy định của pháp luật; được nhà nước hỗ trợ và cấp kinh phí cho các hoạt động gắn với nhiệm vụ của nhà nước giao theo quy định của pháp luật 9 được nhận các nguồn tài trợ ủng hộ hợp pháp của các tổ chức cá nhân trong và ngoài nước; quản lý và sử dụng các nguồn tài trợ ủng hộ này theo quy định của pháp luật 10 được gia nhập làm hội viên của các liên đoàn hiệp hội quốc tế và khu vực tham gia ký kết và thực hiện thỏa thuận quốc tế theo quy định của pháp luật',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 170,319 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
| type | string | string | list |
| details | <ul><li>min: 10 tokens</li><li>mean: 27.45 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 245.47 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>size: 6 elements</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Quân nhân dự bị được xếp trong đơn vị dự bị động viên thì phải có trách nhiệm như thế nào?</code> | <code>passage: "điều 4 trách nhiệm của quân nhân dự bị được xếp trong đơn vị dự bị động viên 1 quân nhân dự bị được xếp trong đơn vị dự bị động viên có trách nhiệm sau đây: a) kiểm tra sức khỏe; b) thực hiện lệnh gọi huấn luyện diễn tập kiểm tra sẵn sàng động viên sẵn sàng chiến đấu; c) thực hiện chế độ sinh hoạt đơn vị dự bị động viên và nhiệm vụ do người chỉ huy giao; d) thực hiện lệnh huy động để bổ sung cho lực lượng thường trực của quân đội nhân dân 2 quân nhân dự bị giữ chức vụ chỉ huy đơn vị dự bị động viên có trách nhiệm sau đây: a) thực hiện quy định tại khoản 1 điều này; b) nắm tình hình số lượng chất lượng đơn vị; duy trì đơn vị sinh hoạt theo chế độ và thực hiện chế độ báo cáo; c) quản lý chỉ huy đơn vị khi huấn luyện diễn tập kiểm tra sẵn sàng động viên sẵn sàng chiến đấu; d) quản lý chỉ huy đơn vị để bổ sung cho lực lượng thường trực của quân đội nhân dân "</code> | <code>['passage: "điều 2 giải thích từ ngữ trong luật này các từ ngữ dưới đây được hiểu như sau: 1 lực lượng dự bị động viên bao gồm quân nhân dự bị và phương tiện kỹ thuật dự bị được đăng ký quản lý và sắp xếp vào đơn vị dự bị động viên để sẵn sàng bổ sung cho lực lượng thường trực của quân đội nhân dân 2 quân nhân dự bị bao gồm sĩ quan dự bị quân nhân chuyên nghiệp dự bị và hạ sĩ quan binh sĩ dự bị được đăng ký theo quy định của luật sĩ quan quân đội nhân dân việt nam luật quân nhân chuyên nghiệp công nhân và viên chức quốc phòng luật nghĩa vụ quân sự "', 'passage: “điều 16 thời hạn thanh tra của đoàn thanh tra chuyên ngành 1 thời hạn thực hiện một cuộc thanh tra chuyên ngành được quy định như sau: a) cuộc thanh tra chuyên ngành do thanh tra bộ tổng cục cục thuộc bộ tiến hành không quá 45 ngày; trường hợp phức tạp có thể kéo dài hơn nhưng không quá 70 ngày; b) cuộc thanh tra chuyên ngành do thanh tra sở chi cục thuộc sở tiến hành không quá 30 ngày; trường hợp phức tạp có thể kéo dài hơn nh...</code> |
| <code>query: Quân nhân chuyên nghiệp dự bị và hạ sĩ quan, binh sĩ dự bị sắp xếp vào đơn vị dự bị động viên là bao nhiêu tuổi?</code> | <code>passage: "điều 17 độ tuổi quân nhân dự bị sắp xếp vào đơn vị dự bị động viên trong thời bình 1 độ tuổi sĩ quan dự bị sắp xếp vào đơn vị dự bị động viên thực hiện theo quy định của luật sĩ quan quân đội nhân dân việt nam 2 độ tuổi quân nhân chuyên nghiệp dự bị và hạ sĩ quan binh sĩ dự bị sắp xếp vào đơn vị dự bị động viên được quy định như sau: a) nam quân nhân chuyên nghiệp dự bị không quá 40 tuổi; hạ sĩ quan binh sĩ dự bị không quá 35 tuổi được sắp xếp vào đơn vị chiến đấu; b) nam quân nhân chuyên nghiệp dự bị và hạ sĩ quan binh sĩ dự bị không quá 45 tuổi; nữ quân nhân dự bị không quá 40 tuổi được sắp xếp vào đơn vị bảo đảm chiến đấu "</code> | <code>['passage: "điều 16 sắp xếp quân nhân dự bị vào đơn vị dự bị động viên 1 sắp xếp quân nhân dự bị đủ tiêu chuẩn về sức khỏe có chuyên nghiệp quân sự đúng với chức danh biên chế; gắn địa bàn tuyển quân với địa bàn động viên; trường hợp thiếu thì sắp xếp quân nhân dự bị có chuyên nghiệp quân sự gần đúng với chức danh biên chế 2 sắp xếp quân nhân chuyên nghiệp dự bị hạ sĩ quan binh sĩ dự bị được thực hiện theo thứ tự quân nhân chuyên nghiệp dự bị hạ sĩ quan binh sĩ dự bị hạng một trước trường hợp thiếu thì sắp xếp binh sĩ dự bị hạng hai 3 sắp xếp quân nhân dự bị vào đơn vị dự bị động viên thuộc đơn vị bộ đội chủ lực trước đơn vị bộ đội địa phương sau "', 'passage: "điều 57 mức đóng nguồn hình thành và sử dụng quỹ bảo hiểm thất nghiệp 1 mức đóng và trách nhiệm đóng bảo hiểm thất nghiệp được quy định như sau: a) người lao động đóng bằng 1% tiền lương tháng; b) người sử dụng lao động đóng bằng 1% quỹ tiền lương tháng của những người lao động đang tham gia bảo hiểm thất nghiệp; c) nhà nước hỗ ...</code> |
| <code>query: Văn phòng Bộ Văn hóa Thể thao và Du lịch có con dấu và tài khoản riêng hay không?</code> | <code>passage: vị trí và chức năng văn phòng bộ là tổ chức hành chính thuộc bộ văn hóa thể thao và du lịch có chức năng tham mưu tổng hợp điều phối giúp bộ trưởng tổ chức các hoạt động chung của bộ; theo dõi đôn đốc các tổ chức đơn vị thuộc bộ thực hiện chương trình kế hoạch công tác của bộ; kiểm soát thủ tục hành chính cải cách hành chính tổ chức triển khai thực hiện cơ chế một cửa một cửa liên thông trong giải quyết thủ tục hành chính theo quy định của pháp luật; bảo đảm điều kiện vật chất kỹ thuật phương tiện làm việc cho hoạt động của lãnh đạo bộ và các cơ quan tổ chức đơn vị sử dụng ngân sách qua văn phòng bộ văn phòng bộ có con dấu riêng và có tài khoản để giao dịch theo quy định của pháp luật</code> | <code>['passage: vị trí và chức năng văn phòng bộ có con dấu và tài khoản riêng để giao dịch theo quy định của pháp luật', 'passage: điều 9 ủy ban văn hóa giáo dục thanh niên thiếu niên và nhi đồng xử lý các đơn thư có nội dung sau: 1 kiến nghị khiếu nại về văn hóa thông tin giáo dục đào tạo thể thao báo chí phát thanh truyền hình quảng cáo thực hiện chính sách đối với thanh niên thiếu niên và nhi đồng và các kiến nghị khiếu nại khác thuộc lĩnh vực ủy ban phụ trách; 2 tố cáo cơ quan tổ chức cá nhân vi phạm pháp luật trong lĩnh vực quy định tại khoản 1 điều này', 'passage: điều 36 nguyên tắc đăng ký hành nghề 1 người hành nghề được đăng ký hành nghề tại nhiều cơ sở khám bệnh chữa bệnh nhưng không được trùng thời gian khám bệnh chữa bệnh giữa các cơ sở khám bệnh chữa bệnh 2 người hành nghề được đăng ký làm việc tại một hoặc nhiều vị trí chuyên môn sau đây trong cùng một cơ sở khám bệnh chữa bệnh nhưng phải bảo đảm chất lượng công việc tại các vị trí được phân công: a) khám bệnh chữa bệnh theo ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
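For reference, a hedged sketch of how this loss is typically constructed in sentence-transformers for (anchor, positive, negative) columns; the snippet mirrors the parameters above but is not the exact training script:
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Savoxism/multilingual-e5-small-finetuned-stage1")

# scale and similarity_fct mirror the loss parameters listed above
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
# With (anchor, positive, negative) triples, every other in-batch passage
# additionally serves as a negative for each anchor.
```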
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0094 | 100 | 0.4275 |
| 0.0188 | 200 | 0.1826 |
| 0.0282 | 300 | 0.089 |
| 0.0376 | 400 | 0.0564 |
| 0.0470 | 500 | 0.0427 |
| 0.0564 | 600 | 0.0308 |
| 0.0658 | 700 | 0.0377 |
| 0.0752 | 800 | 0.0348 |
| 0.0845 | 900 | 0.0481 |
| 0.0939 | 1000 | 0.0552 |
| 0.1033 | 1100 | 0.0505 |
| 0.1127 | 1200 | 0.0431 |
| 0.1221 | 1300 | 0.0497 |
| 0.1315 | 1400 | 0.0455 |
| 0.1409 | 1500 | 0.0529 |
| 0.1503 | 1600 | 0.055 |
| 0.1597 | 1700 | 0.0478 |
| 0.1691 | 1800 | 0.0472 |
| 0.1785 | 1900 | 0.0393 |
| 0.1879 | 2000 | 0.0422 |
| 0.1973 | 2100 | 0.0453 |
| 0.2067 | 2200 | 0.0403 |
| 0.2161 | 2300 | 0.0522 |
| 0.2255 | 2400 | 0.052 |
| 0.2349 | 2500 | 0.0492 |
| 0.2442 | 2600 | 0.0631 |
| 0.2536 | 2700 | 0.0494 |
| 0.2630 | 2800 | 0.0405 |
| 0.2724 | 2900 | 0.046 |
| 0.2818 | 3000 | 0.05 |
| 0.2912 | 3100 | 0.0469 |
| 0.3006 | 3200 | 0.0606 |
| 0.3100 | 3300 | 0.0442 |
| 0.3194 | 3400 | 0.0477 |
| 0.3288 | 3500 | 0.0432 |
| 0.3382 | 3600 | 0.0344 |
| 0.3476 | 3700 | 0.0425 |
| 0.3570 | 3800 | 0.0365 |
| 0.3664 | 3900 | 0.0303 |
| 0.3758 | 4000 | 0.0543 |
| 0.3852 | 4100 | 0.0379 |
| 0.3946 | 4200 | 0.0345 |
| 0.4039 | 4300 | 0.0565 |
| 0.4133 | 4400 | 0.032 |
| 0.4227 | 4500 | 0.0411 |
| 0.4321 | 4600 | 0.0305 |
| 0.4415 | 4700 | 0.0322 |
| 0.4509 | 4800 | 0.0272 |
| 0.4603 | 4900 | 0.0315 |
| 0.4697 | 5000 | 0.0272 |
| 0.4791 | 5100 | 0.0468 |
| 0.4885 | 5200 | 0.0401 |
| 0.4979 | 5300 | 0.0359 |
| 0.5073 | 5400 | 0.0292 |
| 0.5167 | 5500 | 0.051 |
| 0.5261 | 5600 | 0.0433 |
| 0.5355 | 5700 | 0.0273 |
| 0.5449 | 5800 | 0.034 |
| 0.5543 | 5900 | 0.029 |
| 0.5636 | 6000 | 0.029 |
| 0.5730 | 6100 | 0.0391 |
| 0.5824 | 6200 | 0.0277 |
| 0.5918 | 6300 | 0.0415 |
| 0.6012 | 6400 | 0.03 |
| 0.6106 | 6500 | 0.0415 |
| 0.6200 | 6600 | 0.0499 |
| 0.6294 | 6700 | 0.0411 |
| 0.6388 | 6800 | 0.04 |
| 0.6482 | 6900 | 0.0378 |
| 0.6576 | 7000 | 0.0355 |
| 0.6670 | 7100 | 0.0364 |
| 0.6764 | 7200 | 0.035 |
| 0.6858 | 7300 | 0.0243 |
| 0.6952 | 7400 | 0.0264 |
| 0.7046 | 7500 | 0.0391 |
| 0.7140 | 7600 | 0.0344 |
| 0.7233 | 7700 | 0.0338 |
| 0.7327 | 7800 | 0.0352 |
| 0.7421 | 7900 | 0.0238 |
| 0.7515 | 8000 | 0.0431 |
| 0.7609 | 8100 | 0.0243 |
| 0.7703 | 8200 | 0.0244 |
| 0.7797 | 8300 | 0.0335 |
| 0.7891 | 8400 | 0.0299 |
| 0.7985 | 8500 | 0.0281 |
| 0.8079 | 8600 | 0.0353 |
| 0.8173 | 8700 | 0.0312 |
| 0.8267 | 8800 | 0.0226 |
| 0.8361 | 8900 | 0.0247 |
| 0.8455 | 9000 | 0.0303 |
| 0.8549 | 9100 | 0.0236 |
| 0.8643 | 9200 | 0.0256 |
| 0.8736 | 9300 | 0.0205 |
| 0.8830 | 9400 | 0.0332 |
| 0.8924 | 9500 | 0.0226 |
| 0.9018 | 9600 | 0.0263 |
| 0.9112 | 9700 | 0.0346 |
| 0.9206 | 9800 | 0.0247 |
| 0.9300 | 9900 | 0.0322 |
| 0.9394 | 10000 | 0.0433 |
| 0.9488 | 10100 | 0.042 |
| 0.9582 | 10200 | 0.0283 |
| 0.9676 | 10300 | 0.0357 |
| 0.9770 | 10400 | 0.0327 |
| 0.9864 | 10500 | 0.0189 |
| 0.9958 | 10600 | 0.032 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
|
joanna302
| 2025-08-20T14:26:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"sft",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:53:18Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
tags:
- generated_from_trainer
- unsloth
- sft
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_0.0002/runs/59bgfy7v)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755699736
|
Vasya777
| 2025-08-20T14:23:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:22:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aleebaster/blockassist-bc-sly_eager_boar_1755698080
|
aleebaster
| 2025-08-20T14:21:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:21:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aivoryinnovations/jay
|
aivoryinnovations
| 2025-08-20T14:21:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-20T13:23:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
finneganrainier/vit-detector
|
finneganrainier
| 2025-08-20T14:21:27Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:15:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755699647
|
lilTAT
| 2025-08-20T14:21:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:21:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kelasbgd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vocal_scurrying_tarantula
|
kelasbgd
| 2025-08-20T14:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_scurrying_tarantula",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T13:03:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_scurrying_tarantula
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yashikatamta/ppo-LunarLander-v2
|
yashikatamta
| 2025-08-20T14:20:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-20T14:19:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 270.68 +/- 17.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the stored checkpoint filename is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained agent from the Hub; the .zip filename is an assumption
checkpoint = load_from_hub("yashikatamta/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755697967
|
ihsanridzi
| 2025-08-20T14:19:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:19:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755697707
|
manusiaperahu2012
| 2025-08-20T14:18:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:18:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755699346
|
lilTAT
| 2025-08-20T14:16:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:16:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755699305
|
0xaoyama
| 2025-08-20T14:15:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:15:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755699146
|
lqpl
| 2025-08-20T14:14:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:13:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755697592
|
indoempatnol
| 2025-08-20T14:13:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:13:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stephenoptins/tracy_moore_2
|
stephenoptins
| 2025-08-20T14:13:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T13:35:13Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Tracy
---
# Tracy_Moore_2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Tracy` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Tracy",
"lora_weights": "https://huggingface.co/stephenoptins/tracy_moore_2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('stephenoptins/tracy_moore_2', weight_name='lora.safetensors')
image = pipeline('Tracy').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
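As one example, recent diffusers releases with PEFT support let you scale a loaded adapter (a sketch; `adapter_name` is an assumption):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# adapter_name lets set_adapters address this LoRA; the name itself is arbitrary
pipeline.load_lora_weights('stephenoptins/tracy_moore_2', weight_name='lora.safetensors', adapter_name='tracy')
pipeline.set_adapters(['tracy'], adapter_weights=[0.8])  # down-weight the LoRA's influence to 0.8
image = pipeline('Tracy').images[0]
```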
## Training details
- Steps: 3302
- Learning rate: 0.0004
- LoRA rank: 48
## Contribute your own examples
You can use the [community tab](https://huggingface.co/stephenoptins/tracy_moore_2/discussions) to add images that show off what you’ve made with this LoRA.
|
MOLUOKA/bge-reranker-large-Q8_0-GGUF
|
MOLUOKA
| 2025-08-20T14:10:57Z | 0 | 0 | null |
[
"gguf",
"mteb",
"llama-cpp",
"gguf-my-repo",
"feature-extraction",
"en",
"zh",
"base_model:BAAI/bge-reranker-large",
"base_model:quantized:BAAI/bge-reranker-large",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-20T14:10:52Z |
---
license: mit
language:
- en
- zh
tags:
- mteb
- llama-cpp
- gguf-my-repo
pipeline_tag: feature-extraction
base_model: BAAI/bge-reranker-large
model-index:
- name: bge-reranker-base
results:
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 81.27206722525007
- type: mrr
value: 84.14238095238095
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 84.10369934291236
- type: mrr
value: 86.79376984126984
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 35.4600511272538
- type: mrr
value: 34.60238095238095
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 67.27728847727172
- type: mrr
value: 77.1315192743764
---
# MOLUOKA/bge-reranker-large-Q8_0-GGUF
This model was converted to GGUF format from [`BAAI/bge-reranker-large`](https://huggingface.co/BAAI/bge-reranker-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-reranker-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MOLUOKA/bge-reranker-large-Q8_0-GGUF --hf-file bge-reranker-large-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MOLUOKA/bge-reranker-large-Q8_0-GGUF --hf-file bge-reranker-large-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MOLUOKA/bge-reranker-large-Q8_0-GGUF --hf-file bge-reranker-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MOLUOKA/bge-reranker-large-Q8_0-GGUF --hf-file bge-reranker-large-q8_0.gguf -c 2048
```
|
jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter
|
jo-mengr
| 2025-08-20T14:10:40Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:81143",
"loss:MultipleNegativesRankingLoss",
"code",
"dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:NeuML/pubmedbert-base-embeddings",
"base_model:finetune:NeuML/pubmedbert-base-embeddings",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T14:10:22Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:81143
- loss:MultipleNegativesRankingLoss
base_model: NeuML/pubmedbert-base-embeddings
widget:
- source_sentence: sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_1563
sentences:
- This measurement was conducted with 10x 5' v1. Naive B cell from blood of a 26-year
old male, activated with CD3.
- sample_idx:census_d7d7e89c-c93a-422d-8958-9b4a90b69558_5036
- This measurement was conducted with 10x 5' v1. A 26-year-old male individual's
blood sample, containing naive thymus-derived CD4-positive, alpha-beta T cells,
with no activation or treatment, and in G1 phase.
- source_sentence: sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_738
sentences:
- This measurement was conducted with 10x 3' v3. Blasts cells derived from the blood
of a 4-month old male.
- sample_idx:census_cf83c98a-3791-4537-bbde-a719f6d73c13_1016
- This measurement was conducted with 10x 3' v3. This is a megakaryocyte-erythroid
progenitor cell (MEP-like) derived from a 1-month-old female patient with KMT2A-rearranged
(KMT2A-r) infant acute lymphoblastic leukemia (ALL). The cell exhibits increased
lineage plasticity, downregulated steroid response pathways, and belongs to a
hematopoietic stem and progenitor-like (HSPC-like) population that forms an immunosuppressive
signaling circuit with cytotoxic lymphocytes.
- source_sentence: sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_2050
sentences:
- sample_idx:census_2872f4b0-b171-46e2-abc6-befcf6de6306_1719
- This measurement was conducted with 10x 5' v2. Memory B cell derived from a 65-79
year-old male, taken from the mesenteric lymph node.
- This measurement was conducted with 10x 5' v2. IgA plasma cell sample taken from
the mesenteric lymph node of a 65-79 year-old female.
- source_sentence: sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_299
sentences:
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically from the Cingulate gyrus, rostral (CgGr),
Ventral division of MFC - A24 region, with European self-reported ethnicity, analyzed
at the nucleus level.
- This measurement was conducted with 10x 3' v3. Neuron cell type from a 50-year-old
male human cerebral cortex, specifically the rostral cingulate gyrus, ventral
division of MFC, A24, with European ethnicity.
- sample_idx:census_3f31f8ce-bbf6-4df8-8203-aa240ed03026_30
- source_sentence: sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644
sentences:
- sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_16130
- This measurement was conducted with 10x 3' v3. Classical monocytes derived from
the blood of a female individual in her seventies.
- This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta
memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue
of an individual in her eighth decade of life.
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NeuML/pubmedbert-base-embeddings
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 1
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1
metrics:
- type: cosine_accuracy
value: 0.5162578821182251
name: Cosine Accuracy
---
# SentenceTransformer based on NeuML/pubmedbert-base-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [NeuML/pubmedbert-base-embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) <!-- at revision d6eaca8254bc229f3ca42749a5510ae287eb3486 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=1024, bias=True)
(3): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter")
# Run inference
sentences = [
'sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_14644',
"This measurement was conducted with 10x 5' v2. Sample is a CD8-positive, alpha-beta memory T cell, specifically a cytotoxic T cell, from the lamina propria tissue of an individual in her eighth decade of life.",
"This measurement was conducted with 10x 3' v3. Classical monocytes derived from the blood of a female individual in her seventies.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.2246, -0.1095],
# [-0.2246, 1.0000, 0.9513],
# [-0.1095, 0.9513, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.5163** |
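The score above is produced by `TripletEvaluator`, which counts how often an anchor embedding lies closer to its positive than to its negative. A minimal sketch with hypothetical triplets:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-pubmedbert-geneformer-100k_adapter")
# Hypothetical triplet; the reported score uses the dataset's anchor/positive/negative_1 columns
evaluator = TripletEvaluator(
    anchors=["sample_idx:census_example_0"],
    positives=["This measurement was conducted with 10x 3' v3. Classical monocytes from blood."],
    negatives=["This measurement was conducted with 10x 5' v2. Memory B cell from lymph node."],
    name="triplet-dev",
)
print(evaluator(model))  # e.g. {'triplet-dev_cosine_accuracy': ...}
```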
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.72 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 215.14 characters</li><li>max: 870 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.75 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------|
| <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_26009</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a 25-year-old female with European ethnicity, having CD8-positive, alpha-beta T cell type. This cell type exhibits elevated expression of type 1 interferon-stimulated genes (ISGs) in monocytes, reduction of naïve CD4+ T cells correlating with monocyte ISG expression, and expansion of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.</code> | <code>sample_idx:census_218acb0f-9f2f-4f76-b90b-15a4b7c7f629_14165</code> |
| <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_6333</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v2. Conventional dendritic cell from the jejunal epithelium of a female in her eighth decade.</code> | <code>sample_idx:census_1b9d8702-5af8-4142-85ed-020eb06ec4f6_2714</code> |
| <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_271</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Neuron from the thalamic complex (thalamus, posterior nuclear complex of thalamus, medial geniculate nuclei) of a 42-year-old male, identified as a midbrain-derived inhibitory neuron.</code> | <code>sample_idx:census_adda0684-f8ea-4403-b393-2a25607077c4_425</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [9916878](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/9916878bbf20fb8f9d6a0be4c997236e027cabd4)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 102 characters</li><li>mean: 213.87 characters</li><li>max: 981 characters</li></ul> | <ul><li>min: 56 characters</li><li>mean: 58.73 characters</li><li>max: 60 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:--------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------|
| <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_490</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Kidney collecting duct intercalated cell from a 43-year old European male with kidney cancer, taken from the cortex of kidney and cryopreserved for further analysis.</code> | <code>sample_idx:census_0b4a15a7-4e9e-4555-9733-2423e5c66469_9</code> |
| <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_269</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Endothelial cells derived from the cerebellum (specifically, cerebellar vermis) of a 42-year-old male, classified under the vascular supercluster term.</code> | <code>sample_idx:census_4976b234-9028-4b4b-8a2f-8ac59d636219_923</code> |
| <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_10258</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Centroblast cells derived from a 3-year-old male human tonsil sample, with obstructive sleep apnea and recurrent tonsillitis, undergoing affinity maturation and differentiation into memory or plasma cells.</code> | <code>sample_idx:census_44882825-0da1-4547-b721-2c6105d4a9d1_9654</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `learning_rate`: 0.05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `bf16`: True
- `gradient_checkpointing`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_1_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------:|
| 0.3155 | 100 | 4.3009 | 20.4535 | 0.5063 |
| 0.6309 | 200 | 3.2356 | 22.4190 | 0.5055 |
| 0.9464 | 300 | 2.9358 | 19.8626 | 0.5072 |
| 1.2618 | 400 | 2.7478 | 19.9669 | 0.5104 |
| 1.5773 | 500 | 2.634 | 18.4317 | 0.5134 |
| 1.8927 | 600 | 2.554 | 17.2588 | 0.5163 |
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755697080
|
kojeklollipop
| 2025-08-20T14:07:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:07:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755697210
|
hakimjustbao
| 2025-08-20T14:06:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755698712
|
yaelahnal
| 2025-08-20T14:06:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:06:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Alevit/act_so101_250814_lego_policy
|
Alevit
| 2025-08-20T14:04:41Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Alevit/250814_lego",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T14:08:58Z |
---
datasets: Alevit/250814_lego
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
MOLUOKA/bge-reranker-large-Q4_K_M-GGUF
|
MOLUOKA
| 2025-08-20T14:04:00Z | 0 | 0 | null |
[
"gguf",
"mteb",
"llama-cpp",
"gguf-my-repo",
"feature-extraction",
"en",
"zh",
"base_model:BAAI/bge-reranker-large",
"base_model:quantized:BAAI/bge-reranker-large",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-20T14:03:55Z |
---
license: mit
language:
- en
- zh
tags:
- mteb
- llama-cpp
- gguf-my-repo
pipeline_tag: feature-extraction
base_model: BAAI/bge-reranker-large
model-index:
- name: bge-reranker-base
results:
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 81.27206722525007
- type: mrr
value: 84.14238095238095
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 84.10369934291236
- type: mrr
value: 86.79376984126984
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 35.4600511272538
- type: mrr
value: 34.60238095238095
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 67.27728847727172
- type: mrr
value: 77.1315192743764
---
# MOLUOKA/bge-reranker-large-Q4_K_M-GGUF
This model was converted to GGUF format from [`BAAI/bge-reranker-large`](https://huggingface.co/BAAI/bge-reranker-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-reranker-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MOLUOKA/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MOLUOKA/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MOLUOKA/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MOLUOKA/bge-reranker-large-Q4_K_M-GGUF --hf-file bge-reranker-large-q4_k_m.gguf -c 2048
```
|
roeker/blockassist-bc-quick_wiry_owl_1755698561
|
roeker
| 2025-08-20T14:03:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T14:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
|
jasonhuang3
| 2025-08-20T14:02:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T17:39:46Z |
---
base_model: Qwen/Qwen2.5-Math-7B
library_name: transformers
model_name: bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jasonhuang3/bpo-qwen-2-5-7b-math-ep2-our_4_alpha_0.3_lora_28k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jasonhuang3-school/huggingface/runs/jcdwzlxa)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
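For orientation, here is a minimal sketch of what a DPO run with TRL typically looks like; the preference dataset, batch size, and `beta` below are illustrative assumptions, not the settings used for this checkpoint:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-Math-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical preference dataset with "prompt"/"chosen"/"rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

# beta controls how strongly the policy is regularized toward the reference model.
training_args = DPOConfig(output_dir="dpo-output", beta=0.1, per_device_train_batch_size=2)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```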
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.1
- Pytorch: 2.4.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amir-ali-ai/results
|
amir-ali-ai
| 2025-08-20T14:01:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:ZharfaTech/ZharfaOpen-0309",
"base_model:finetune:ZharfaTech/ZharfaOpen-0309",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T14:01:50Z |
---
base_model: ZharfaTech/ZharfaOpen-0309
library_name: transformers
model_name: results
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for results
This model is a fine-tuned version of [ZharfaTech/ZharfaOpen-0309](https://huggingface.co/ZharfaTech/ZharfaOpen-0309).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amir-ali-ai/results", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/amirmaasoumi507-amoozesh/huggingface/runs/mw84ybv6)
This model was trained with SFT.
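As a point of reference, a minimal sketch of an SFT run with TRL's `SFTTrainer`; the dataset and output directory are illustrative assumptions, not the ones used for this checkpoint:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-style dataset with a "messages" column.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="ZharfaTech/ZharfaOpen-0309",  # base model named in this card
    args=SFTConfig(output_dir="results"),
    train_dataset=dataset,
)
trainer.train()
```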
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
unitova/blockassist-bc-zealous_sneaky_raven_1755696710
|
unitova
| 2025-08-20T13:59:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:59:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Anuar123/A
|
Anuar123
| 2025-08-20T13:59:01Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T13:59:01Z |
---
license: apache-2.0
---
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755696509
|
coelacanthxyz
| 2025-08-20T13:58:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:58:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
palyafari/FeedbackClassifierGemma
|
palyafari
| 2025-08-20T13:58:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T13:34:40Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: FeedbackClassifierGemma
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for FeedbackClassifierGemma
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="palyafari/FeedbackClassifierGemma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755698201
|
yaelahnal
| 2025-08-20T13:57:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T13:57:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aayushp123/whisper-large-v3-zeroth
|
aayushp123
| 2025-08-20T13:54:58Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"region:us"
] | null | 2025-08-20T13:54:26Z |
---
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-zeroth
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-zeroth
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch of the same configuration follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
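For readers who prefer code, a hedged reconstruction of the hyperparameters above using `Seq2SeqTrainingArguments` from `transformers`; the output directory and the choice of the Seq2Seq variant are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mirror of the listed hyperparameters; Adam betas/epsilon match the defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-zeroth",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```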
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
coppertoy/blockassist-bc-foxy_tame_salmon_1755692925
|
coppertoy
| 2025-08-20T12:28:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"foxy tame salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:28:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- foxy tame salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-gl-pt-ctranslate2-android
|
manancode
| 2025-08-20T12:28:42Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:28:33Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gl-pt-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gl-pt` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gl-pt
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline (see the conversion sketch below)
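The conversion pipeline itself isn't published with this card; the following is a minimal sketch of how such an INT8 export is typically produced with CTranslate2's `ct2-transformers-converter` CLI, where the output directory is an assumption:

```bash
pip install ctranslate2 transformers sentencepiece

# Convert the original OPUS-MT checkpoint to CTranslate2 with INT8 weights.
ct2-transformers-converter \
  --model Helsinki-NLP/opus-mt-gl-pt \
  --output_dir opus-mt-gl-pt-ctranslate2 \
  --quantization int8
```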
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
brAInwav/GLM-4.5-mlx-4Bit
|
brAInwav
| 2025-08-20T12:28:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"mlx",
"conversational",
"en",
"zh",
"base_model:zai-org/GLM-4.5",
"base_model:quantized:zai-org/GLM-4.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-08-20T12:01:58Z |
---
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- mlx
base_model: zai-org/GLM-4.5
---
# brAInwav/GLM-4.5-mlx-4Bit
This model, [brAInwav/GLM-4.5-mlx-4Bit](https://huggingface.co/brAInwav/GLM-4.5-mlx-4Bit), was converted to MLX format from [zai-org/GLM-4.5](https://huggingface.co/zai-org/GLM-4.5) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("brAInwav/GLM-4.5-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
manancode/opus-mt-gil-fr-ctranslate2-android
|
manancode
| 2025-08-20T12:27:57Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:48Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
2hpsatt/blockassist-bc-huge_deft_eagle_1755692812
|
2hpsatt
| 2025-08-20T12:27:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"huge deft eagle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:27:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- huge deft eagle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-gil-en-ctranslate2-android
|
manancode
| 2025-08-20T12:27:20Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:27:09Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-gil-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-gil-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-gil-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
lautan/blockassist-bc-gentle_patterned_goat_1755691110
|
lautan
| 2025-08-20T12:27:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:27:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
syuvers/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_melodic_raven
|
syuvers
| 2025-08-20T12:25:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mangy_melodic_raven",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T12:07:21Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mangy_melodic_raven
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-fse-fi-ctranslate2-android
|
manancode
| 2025-08-20T12:25:16Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:25:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fse-fi-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fse-fi` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fse-fi
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
elsvastika/blockassist-bc-arctic_soaring_weasel_1755690277
|
elsvastika
| 2025-08-20T12:25:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"arctic soaring weasel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- arctic soaring weasel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-fr-yo-ctranslate2-android
|
manancode
| 2025-08-20T12:24:52Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:24:43Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-yo-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-yo` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-yo
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755691030
|
kojeklollipop
| 2025-08-20T12:24:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T12:24:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Rudra-madlads/AceInstruct-1.5B-Gensyn-Swarm-mottled_beaked_jaguar
|
Rudra-madlads
| 2025-08-20T12:24:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mottled_beaked_jaguar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T05:17:42Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mottled_beaked_jaguar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manancode/opus-mt-fr-wls-ctranslate2-android
|
manancode
| 2025-08-20T12:24:16Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:24:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-wls-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-wls` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-wls
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-war-ctranslate2-android
|
manancode
| 2025-08-20T12:24:04Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:54Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-war-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-war` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-war
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-ve-ctranslate2-android
|
manancode
| 2025-08-20T12:23:40Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:31Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-ve-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ve` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ve
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-ty-ctranslate2-android
|
manancode
| 2025-08-20T12:23:16Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:23:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-ty-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-ty` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-ty
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
Milica-y-Angel-David-video/watch-full-original-clip
|
Milica-y-Angel-David-video
| 2025-08-20T12:23:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T12:22:55Z |
|
manancode/opus-mt-fr-tum-ctranslate2-android
|
manancode
| 2025-08-20T12:22:39Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:22:31Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tum-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tum` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tum
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-tl-ctranslate2-android
|
manancode
| 2025-08-20T12:21:25Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:21:16Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-tl-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-tl` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-tl
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-st-ctranslate2-android
|
manancode
| 2025-08-20T12:20:36Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:20:26Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-st-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-st` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-st
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-sm-ctranslate2-android
|
manancode
| 2025-08-20T12:19:57Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:19:46Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-sm-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-sm` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-sm
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-fr-pon-ctranslate2-android
|
manancode
| 2025-08-20T12:18:04Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-20T12:17:55Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-fr-pon-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-fr-pon` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-fr-pon
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|