Dataset schema (one row per Hugging Face model repo):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 18:33:19 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 18:33:14 |
| card | string | length 11 – 1.01M |
hasdal/053c182c-e9ca-420e-b9fd-22199c23b1cb
|
hasdal
| 2025-08-20T08:37:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/smollm-1.7b-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-20T08:37:42Z |
---
base_model: unsloth/smollm-1.7b-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/smollm-1.7b-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
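The author has not supplied a snippet. As a hedged sketch only: the metadata above declares `library_name: peft` and `base_model: unsloth/smollm-1.7b-bnb-4bit`, so loading plausibly follows standard PEFT usage (unverified for this adapter):

```python
def load_adapter(adapter_id: str = "hasdal/053c182c-e9ca-420e-b9fd-22199c23b1cb"):
    """Attach this LoRA adapter to its declared 4-bit base model.

    Imports are deferred so the sketch stays importable without
    transformers/peft installed; calling this downloads several GB.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("unsloth/smollm-1.7b-bnb-4bit")
    tokenizer = AutoTokenizer.from_pretrained("unsloth/smollm-1.7b-bnb-4bit")
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```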
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
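None of the fields above are filled in, but the calculator's arithmetic can be sketched directly; every number below is a placeholder for illustration, not a measurement for this model.

```python
def estimate_co2_kg(gpu_power_kw: float, hours: float,
                    grid_kg_per_kwh: float, pue: float = 1.0) -> float:
    """CO2e in kg: energy drawn (kWh, scaled by datacenter PUE)
    times the grid's carbon intensity (kg CO2e per kWh)."""
    return gpu_power_kw * hours * pue * grid_kg_per_kwh

# Placeholder inputs: one 300 W GPU for 10 hours on a 0.4 kg/kWh grid.
print(round(estimate_co2_kg(0.3, 10, 0.4), 6))  # 1.2
```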
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
lostinjamal/053c182c-e9ca-420e-b9fd-22199c23b1cb
|
lostinjamal
| 2025-08-20T08:37:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/smollm-1.7b-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-20T08:37:12Z |
---
base_model: unsloth/smollm-1.7b-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/smollm-1.7b-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
sdagsadgd/blockassist-bc-sedate_squeaky_salamander_1755675680
|
sdagsadgd
| 2025-08-20T08:35:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sedate squeaky salamander",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:35:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sedate squeaky salamander
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755677293
|
chainway9
| 2025-08-20T08:34:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:34:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nurule/granite-3.3-8b-legal-lt-small
|
nurule
| 2025-08-20T08:34:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"granite",
"generated_from_trainer",
"base_model:ibm-granite/granite-3.3-8b-base",
"base_model:adapter:ibm-granite/granite-3.3-8b-base",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-20T07:08:19Z |
---
library_name: peft
license: apache-2.0
base_model: ibm-granite/granite-3.3-8b-base
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
# ===================================================================
# Axolotl Configuration for Fine-Tuning ibm-granite/granite-3.3-8b-base
# Task: Fill-in-the-Middle (FIM) for Legal Text Auto-Suggestion
# Infrastructure: 10x NVIDIA A40 GPUs on RunPod
# ===================================================================
# ---
# Section 1: Foundational Model & Tokenizer Configuration
# ---
base_model: ibm-granite/granite-3.3-8b-base # The base model to fine-tune. Chosen for its native FIM support. [3]
model_type: GraniteForCausalLM # The specific model architecture class. Found in the model's config.json. [14]
tokenizer_type: AutoTokenizer # Automatically loads the correct tokenizer, including special FIM tokens, from the model repo. [15]
trust_remote_code: true # Essential for models like Granite with custom code implementations not yet in the main transformers library. [14]
#dataset_processes: 80
# ---
# Section 2: Dataset Configuration
# This section assumes a preliminary preprocessing step has been completed.
# The 400,000 raw text files must be converted into a single.jsonl file
# where each line is a JSON object: {"text": "<FIM-formatted string>"}.
# ---
datasets:
- path: /workspace/data/sections-fim-small.jsonl # IMPORTANT: Replace with the actual path to your preprocessed dataset.
type: completion # The 'completion' type is ideal for pre-formatted text datasets. [4]
train_on_inputs: true # CRITICAL for FIM. This ensures the model learns from the entire prefix-suffix-middle structure, not just the "completion" part. [16]
# ---
# Section 3: Performance & Efficiency (QLoRA, Precision, Attention)
# ---
adapter: qlora # Enables Quantized Low-Rank Adaptation for memory-efficient fine-tuning. [6]
load_in_4bit: true # Loads the base model with weights quantized to 4-bit, the core of QLoRA. [5]
bf16: true # Use bfloat16 mixed precision. Optimal for A40 (Ampere) GPUs for speed and stability. [7, 17]
flash_attention: true # Enables Flash Attention 2 for significant speedup in the attention mechanism. [8]
# ---
# Section 4: LoRA Hyperparameters
# ---
lora_r: 64 # Rank of the LoRA matrices. A higher rank provides more capacity for adaptation.
lora_alpha: 128 # LoRA scaling factor. A common and effective heuristic is to set alpha = 2 * r.
lora_dropout: 0.05 # Dropout rate for LoRA layers to prevent overfitting.
lora_target_modules: # Target all linear layers in the attention and MLP blocks for comprehensive adaptation. [18, 19]
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
# WARNING: Do NOT use 'lora_modules_to_save'. Adding 'embed_tokens' or 'lm_head' here can cause the adapter size to bloat to several GBs, defeating the purpose of LoRA. This is unnecessary as the base tokenizer already includes FIM tokens. [20]
# ---
# Section 5: Multi-GPU Scaling with DeepSpeed
# ---
deepspeed: deepspeed_configs/zero2.json # Path to the DeepSpeed configuration file. ZeRO Stage 2 shards optimizer state and gradients across the 10 GPUs. [12]
# ---
# Section 6: Training Hyperparameters
# ---
sequence_len: 8192 # Leverage the model's long-context capability. A good balance for legal text without being excessively memory-intensive. [3]
sample_packing: false # Disabled here; when enabled, packing fits multiple short examples into one sequence to maximize GPU utilization and training speed. [2, 21]
micro_batch_size: 8 # Per-device batch size for A40s with QLoRA and ZeRO-2.
gradient_accumulation_steps: 8 # Accumulate gradients over 8 steps before an optimizer update.
gradient_checkpointing: true
# Global Batch Size = micro_batch_size * num_gpus * gradient_accumulation_steps = 8 * 10 * 8 = 640. A robust size for an 8B model.
num_epochs: 1 # With a large dataset of 400k files, one epoch is often sufficient for domain adaptation and avoids overfitting.
optimizer: paged_adamw_8bit # The recommended optimizer for QLoRA, designed for memory efficiency. [5]
lr_scheduler: cosine # Cosine learning rate scheduler often leads to better final model performance.
learning_rate: 2.0e-5 # A standard and effective learning rate for Adam-based optimizers during fine-tuning.
warmup_steps: 100 # A brief warmup period to stabilize training at the start.
# ---
# Section 7: Logging & Checkpointing
# ---
output_dir: ./outputs # Directory to save checkpoints and final adapter.
val_set_size: 0.01 # Use 1% of the data (4,000 documents) for validation to monitor performance and prevent overfitting.
save_strategy: "steps" # Save checkpoints at regular step intervals.
eval_strategy: "steps" # Evaluate on the validation set at regular step intervals.
save_steps: 200 # Save a checkpoint every 200 steps.
eval_steps: 200 # Evaluate every 200 steps. This allows for selecting the best-performing checkpoint.
logging_steps: 10 # Log training metrics frequently for detailed monitoring.
# It is highly recommended to use Weights & Biases for monitoring large-scale training runs.
# Before running, execute `wandb login` and enter your API key.
wandb_project: "granite-lt-legal-fim" # Project name for W&B.
wandb_run_name: "granite-8b-fim-qlora-lt-run-1" # A descriptive name for the specific run.
report_to: "wandb"
```
</details><br>
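The config's dataset section assumes the raw legal texts were already rewritten into FIM strings. A minimal sketch of that preprocessing follows; the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinel names are an assumption borrowed from the StarCoder/Granite-code convention and must be checked against the actual granite-3.3 tokenizer before use.

```python
import json
import random

# Assumed FIM sentinels; verify against the model's tokenizer special tokens.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def to_fim_example(text: str, rng: random.Random) -> dict:
    """Cut the document at two random points and emit it in
    prefix-suffix-middle (PSM) order, as one JSONL record."""
    a, b = sorted(rng.sample(range(len(text)), 2))
    prefix, middle, suffix = text[:a], text[a:b], text[b:]
    return {"text": f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"}

rng = random.Random(0)
record = to_fim_example("The parties agree to the terms set out below.", rng)
print(json.dumps(record))
```

In the full pipeline each of the 400,000 files would be streamed through such a function and appended to `sections-fim-small.jsonl`; with `train_on_inputs: true` the model then learns from the entire prefix-suffix-middle sequence, as the config comment notes.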
# outputs
This model is a fine-tuned version of [ibm-granite/granite-3.3-8b-base](https://huggingface.co/ibm-granite/granite-3.3-8b-base) on the /workspace/data/sections-fim-small.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 10
- gradient_accumulation_steps: 8
- total_train_batch_size: 640
- total_eval_batch_size: 80
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
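The two aggregate batch sizes above follow directly from the per-device settings; a quick arithmetic check:

```python
micro_batch_size = 8   # per-device train batch
eval_batch_size = 8    # per-device eval batch
num_devices = 10       # A40 GPUs
grad_accum_steps = 8

total_train = micro_batch_size * num_devices * grad_accum_steps
total_eval = eval_batch_size * num_devices  # no gradient accumulation at eval time

print(total_train, total_eval)  # 640 80
```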
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.1667 | 1 | 2.0800 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
kongleehan/my_awesome_video_cls_model
|
kongleehan
| 2025-08-20T08:34:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-08-20T08:34:14Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_video_cls_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_video_cls_model
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2957
- Accuracy: 0.9143
## Model description
More information needed
## Intended uses & limitations
More information needed
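No usage example is given; one plausible way to run this checkpoint is the standard transformers video-classification pipeline (the repo id is taken from this card, everything else is an assumption):

```python
def classify_video(video_path: str):
    """Run the fine-tuned VideoMAE checkpoint on a local video file.

    The import is deferred so the sketch stays importable without
    transformers installed; calling this downloads the checkpoint.
    """
    from transformers import pipeline

    clf = pipeline("video-classification",
                   model="kongleehan/my_awesome_video_cls_model")
    return clf(video_path)
```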
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0926 | 0.25 | 300 | 1.9448 | 0.3286 |
| 0.2963 | 1.25 | 600 | 0.7159 | 0.7429 |
| 0.0149 | 2.25 | 900 | 0.5006 | 0.8571 |
| 0.0027 | 3.25 | 1200 | 0.2957 | 0.9143 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Orginals-Uppal-Farm-Girl-Viral-Video-Links/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial
|
Orginals-Uppal-Farm-Girl-Viral-Video-Links
| 2025-08-20T08:33:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:33:16Z |
|
Sunbird/qwen3-14b-ug40-sft-translation-plus-multilingual-tasks-merged
|
Sunbird
| 2025-08-20T08:32:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T08:15:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
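The card leaves this section empty; since the tags above mark the repo as a merged transformers text-generation model, a hedged sketch of basic usage would be:

```python
def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Greedy generation with the merged checkpoint. Imports are deferred
    so the sketch stays importable; calling this downloads 14B-scale weights."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "Sunbird/qwen3-14b-ug40-sft-translation-plus-multilingual-tasks-merged"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo)
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```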
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Milica-y-Angel-David/Milica.y.Angel.David.Video.Debut.Erome.Video.de.Milica.y.Angel.David.ybanez.Jugar.y.descargar
|
Milica-y-Angel-David
| 2025-08-20T08:32:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:31:49Z |
|
VIDEOS-18-Shubham-Gupta-viral-video-Clips/New.full.videos.Shubham.Gupta.Viral.Video.Official.Tutorial
|
VIDEOS-18-Shubham-Gupta-viral-video-Clips
| 2025-08-20T08:31:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:31:19Z |
|
vuitton/LouisVuitton_model5
|
vuitton
| 2025-08-20T08:30:43Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T08:24:45Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cmejoukqo0u4jrts8h12d5rpm
|
BootesVoid
| 2025-08-20T08:29:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T08:29:05Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: G3RMNGRL
---
# Cm8Tb7Xkk0000Wzj24Pkk2M5G_Cmejoukqo0U4Jrts8H12D5Rpm
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `G3RMNGRL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "G3RMNGRL",
"lora_weights": "https://huggingface.co/BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cmejoukqo0u4jrts8h12d5rpm/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cmejoukqo0u4jrts8h12d5rpm', weight_name='lora.safetensors')
image = pipeline('G3RMNGRL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm8tb7xkk0000wzj24pkk2m5g_cmejoukqo0u4jrts8h12d5rpm/discussions) to add images that show off what you’ve made with this LoRA.
|
evanurasyifa-Official-video-Clip-hq/Original.New.full.videos.evanurasyifa.Viral.Video.Official.Tutorial
|
evanurasyifa-Official-video-Clip-hq
| 2025-08-20T08:25:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:25:13Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755676741
|
quantumxnode
| 2025-08-20T08:25:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:25:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lavinzco/blockassist-bc-thick_climbing_giraffe_1755674642
|
lavinzco
| 2025-08-20T08:23:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick climbing giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:23:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick climbing giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Archita-Phukan-Viral-full-Video-hq/New.full.Videos.Archita.Phukan.Viral.Video.New.MMS.Original
|
Archita-Phukan-Viral-full-Video-hq
| 2025-08-20T08:23:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:23:36Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
18-Clip-Sophie-Rain-Viral-video-original/New.full.videos.Sophie.Rain.Spiderman.Viral.Video.Official.Tutorial
|
18-Clip-Sophie-Rain-Viral-video-original
| 2025-08-20T08:20:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:20:45Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
koloni/blockassist-bc-deadly_graceful_stingray_1755676489
|
koloni
| 2025-08-20T08:20:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:20:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755676297
|
katanyasekolah
| 2025-08-20T08:19:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:19:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
se-filtra-video-de-milica-y-angel-david/ver.Milica.y.angel.david.video.erome
|
se-filtra-video-de-milica-y-angel-david
| 2025-08-20T08:19:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:19:34Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755676360
|
kojeklollipop
| 2025-08-20T08:18:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:18:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755676082
|
vwzyrraz7l
| 2025-08-20T08:17:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:17:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Original-Uppal-Farm-Girl-Viral-Video-Clips/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial
|
Original-Uppal-Farm-Girl-Viral-Video-Clips
| 2025-08-20T08:17:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:16:59Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
PavanSakthivel/q-FrozenLake-v1-4x4-noSlippery
|
PavanSakthivel
| 2025-08-20T08:16:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-20T08:16:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is provided by the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="PavanSakthivel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
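Once loaded, the model is a dict whose Q-table (stored under a key such as `qtable` in the course format — name assumed here) maps states to action values; a greedy rollout over such a table can be sketched on a toy deterministic environment like this:

```python
import numpy as np

# Toy stand-in for the downloaded Q-table: 4 states x 2 actions.
# In the course format, the real table would live in model["qtable"].
qtable = np.array([
    [0.1, 0.9],   # state 0: action 1 is best
    [0.8, 0.2],   # state 1: action 0 is best
    [0.0, 1.0],   # state 2: action 1 is best
    [0.0, 0.0],   # state 3: terminal
])

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for this state."""
    return int(np.argmax(qtable[state]))

# Deterministic toy transitions standing in for env.step()
transitions = {(0, 1): 1, (1, 0): 2, (2, 1): 3}

state, path = 0, [0]
while state != 3:
    state = transitions[(state, greedy_action(qtable, state))]
    path.append(state)

print(path)  # [0, 1, 2, 3]
```

With a real `gym` environment, the transition dict is replaced by `env.step(action)` inside the loop.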
|
Orginal-afrin-apu-viral-video-link/New.full.videos.afrin.apu.Viral.Video.Official.Tutorial
|
Orginal-afrin-apu-viral-video-link
| 2025-08-20T08:15:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:15:25Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?leaked-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
rourkerhotmail1/blockassist-bc-stalking_scruffy_walrus_1755675437
|
rourkerhotmail1
| 2025-08-20T08:15:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stalking scruffy walrus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:15:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stalking scruffy walrus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEOS-18-prabh-sandhu-viral-video-Clip/New.full.videos.prabh.sandhu.Viral.Video.Official.Tutorial
|
VIDEOS-18-prabh-sandhu-viral-video-Clip
| 2025-08-20T08:14:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:14:24Z |
<a href="https://sdu.sk/AyL"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
coppertoy/blockassist-bc-dappled_purring_bobcat_1755677667
|
coppertoy
| 2025-08-20T08:14:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dappled purring bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:14:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dappled purring bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TonevitItaly/TonevitItaly
|
TonevitItaly
| 2025-08-20T08:14:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T08:13:40Z |
---
license: apache-2.0
---
What is Tonevit?
Tonevit Capsule is a capsule for hypertension designed to help maintain healthy blood pressure levels safely and effectively. It is developed for people who want to take care of their cardiovascular health while reducing the risks linked to hypertension. Unlike intensive chemical treatments, which can sometimes have side effects, Tonevit Pills is formulated to act gently on the body, making it a reliable choice for daily use. It offers long-term support for people who want to balance their blood pressure and protect heart health naturally.
Official website:<a href="https://www.nutritionsee.com/tonevitaly">www.Tonevit.com</a>
<p><a href="https://www.nutritionsee.com/tonevitaly"> <img src="https://www.nutritionsee.com/wp-content/uploads/2025/08/Tonevit-Italy.png" alt="enter image description here"> </a></p>
<a href="https://www.nutritionsee.com/tonevitaly">Buy now!! Click the link below for more information and get 50% off now... Hurry</a>
Official website:<a href="https://www.nutritionsee.com/tonevitaly">www.Tonevit.com</a>
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755676079
|
indoempatnol
| 2025-08-20T08:13:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:13:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755677523
|
ypszn
| 2025-08-20T08:13:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:13:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-deft_miniature_chinchilla_1755677593
|
AnerYubo
| 2025-08-20T08:13:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deft miniature chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:13:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deft miniature chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-chattering_regal_bat_1755677589
|
AnerYubo
| 2025-08-20T08:13:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering regal bat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:13:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering regal bat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
minchai23/test_dataset_modify_time
|
minchai23
| 2025-08-20T08:12:54Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T08:12:54Z |
---
license: apache-2.0
---
|
insomniaclivec1/blockassist-bc-unseen_marine_mandrill_1755675388
|
insomniaclivec1
| 2025-08-20T08:12:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"unseen marine mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:12:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- unseen marine mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
clairedhx/mistral7b-labels2codes-lora
|
clairedhx
| 2025-08-20T08:12:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"region:us"
] |
text-generation
| 2025-08-20T07:57:35Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
library_name: peft
model_name: mistral7b_labels2codes_lora
tags:
- base_model:adapter:mistralai/Mistral-7B-Instruct-v0.3
- lora
- sft
- transformers
- trl
licence: license
pipeline_tag: text-generation
---
# Model Card for mistral7b_labels2codes_lora
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3).
It has been trained using [TRL](https://github.com/huggingface/trl).
Mistral-7B Instruct — LoRA (ICD-10 labels → codes)
This LoRA adapter fine-tunes mistralai/Mistral-7B-Instruct-v0.3 to map French ICD-10 diagnostic labels (synonyms) to their corresponding codes (dot-less, up to 5 chars).
## Quick start
```python
from transformers import pipeline

question = "Libellé: Antécédent bronchite chronique"
# Loading the adapter repo directly requires `peft` to be installed
generator = pipeline("text-generation", model="clairedhx/mistral7b-labels2codes-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
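The target codes are dot-less and at most 5 characters long; a hypothetical post-processing helper for normalizing raw generations into that format could look like:

```python
import re

def normalize_icd10(raw: str) -> str:
    """Normalize a generated ICD-10 code to the dot-less, <=5-char format.

    Hypothetical helper; the exact output format of the adapter may differ.
    """
    # Keep only alphanumerics (drops the dot in e.g. "J44.1"), uppercase, truncate.
    code = re.sub(r"[^A-Za-z0-9]", "", raw).upper()
    return code[:5]

print(normalize_icd10("j44.1"))    # J441
print(normalize_icd10("I10.90x"))  # I1090
```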
## Training procedure
It was trained via supervised fine-tuning (QLoRA) on an instruction dataset built from (label, code) pairs (instruct_labels2codes) derived from a curated ICD-10 synonyms table created by web scraping aideaucodage.fr.
### Framework versions
- PEFT 0.17.0
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755675981
|
calegpedia
| 2025-08-20T08:11:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:11:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755677357
|
yaelahnal
| 2025-08-20T08:10:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:10:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
flemmingpetter2/blockassist-bc-hardy_subtle_snake_1755675346
|
flemmingpetter2
| 2025-08-20T08:08:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hardy subtle snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:08:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hardy subtle snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755675560
|
helmutsukocok
| 2025-08-20T08:07:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:07:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dongjuu/qwen3-1.7b-base-MED-Instruct
|
dongjuu
| 2025-08-20T08:06:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T08:05:35Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Orginal-Arisleidy-Viral-Video-Clip/New.full.videos.Arisleidy.Viral.Video.Official.Tutorial
|
Orginal-Arisleidy-Viral-Video-Clip
| 2025-08-20T08:06:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T08:06:04Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/3ckkv2u7?viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
chainway9/blockassist-bc-untamed_quick_eel_1755675444
|
chainway9
| 2025-08-20T08:03:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ypszn/blockassist-bc-yapping_pawing_worm_1755676956
|
ypszn
| 2025-08-20T08:03:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:03:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Kokoutou/soundsright_dn_2008_4
|
Kokoutou
| 2025-08-20T08:02:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T07:02:42Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16 kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the following command to build and start the container:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
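A minimal client sketch that builds requests against these endpoints follows; the paths come from the list above, while the host/port use the stated defaults (response handling is assumed):

```python
# Base address matches the container's defaults documented above.
BASE_URL = "http://0.0.0.0:6500"

ENDPOINTS = {
    "status": "/status/",
    "prepare": "/prepare/",
    "upload": "/upload-audio/",
    "enhance": "/enhance/",
    "download": "/download-enhanced/",
}

def url_for(name: str) -> str:
    """Build the full URL for a named API endpoint."""
    return BASE_URL + ENDPOINTS[name]

print(url_for("status"))  # http://0.0.0.0:6500/status/
```

An actual client would then issue e.g. `requests.get(url_for("status"))` and poll until the API reports ready before calling `/prepare/`.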
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
|
Mostefa-Terbeche/diabetic-retinopathy-messidor-efficientnet_b3-gentle-20250724-193632
|
Mostefa-Terbeche
| 2025-08-20T08:02:10Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:messidor",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-20T07:40:38Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- messidor
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: messidor_efficientnet_b3_gentle
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: messidor
name: MESSIDOR
metrics:
- type: accuracy
value: 0.25287356321839083
- type: quadratic-kappa
value: 0.5254369364354893
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the efficientnet_b3 architecture on the messidor dataset with gentle preprocessing.
## Model Details
- **Architecture**: efficientnet_b3
- **Dataset**: messidor
- **Preprocessing**: gentle
- **Training Date**: 20250724-193632
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: messidor_efficientnet_b3_20250724-193632_new
## Performance
- **Test Accuracy**: 0.25287356321839083
- **Test Quadratic Kappa**: 0.5254369364354893
- **Validation Kappa**: 0.5254369364354893
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
repo_id="your-username/diabetic-retinopathy-messidor-efficientnet_b3-gentle",
filename="model_best.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
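The class indices above map directly to grade names; a small helper for turning a predicted index into a readable label (the argmax step over model logits is assumed) could be:

```python
# Mapping from predicted class index to human-readable grade,
# taken from the class list above.
DR_GRADES = {
    0: "No DR",
    1: "Mild DR",
    2: "Moderate DR",
    3: "Severe DR",
    4: "Proliferative DR",
}

def grade_label(class_index: int) -> str:
    """Return the diabetic retinopathy grade name for a predicted class index."""
    return DR_GRADES[class_index]

print(grade_label(2))  # Moderate DR
```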
## Citation
If you use this model, please cite your research paper/thesis.
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755674877
|
milliarderdol
| 2025-08-20T08:01:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T08:01:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rbelanec/train_qnli_1755619685
|
rbelanec
| 2025-08-20T08:01:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"prefix-tuning",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-08-19T16:11:41Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- llama-factory
- prefix-tuning
- generated_from_trainer
model-index:
- name: train_qnli_1755619685
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_qnli_1755619685
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the qnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1737
- Num Input Tokens Seen: 94426336
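This card does not document the prompt template used to present QNLI pairs during prefix-tuning, so any formatting is an assumption. A rough, hypothetical sketch of how a question/sentence entailment pair might be rendered for the instruct model:

```python
# Hypothetical template -- the actual format used in training is not
# documented in this card, so this is only an assumption for illustration.
def format_qnli(question: str, sentence: str) -> str:
    """Render a QNLI question/sentence pair as a single prompt string."""
    return (
        "Does the sentence answer the question? "
        "Reply 'entailment' or 'not_entailment'.\n"
        f"Question: {question}\n"
        f"Sentence: {sentence}\n"
        "Answer:"
    )
```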
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 123
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:------:|:---------------:|:-----------------:|
| 0.1443 | 0.5000 | 11784 | 0.1060 | 4726400 |
| 0.0196 | 1.0000 | 23568 | 0.0572 | 9444272 |
| 0.1075 | 1.5001 | 35352 | 0.0433 | 14172240 |
| 0.011 | 2.0001 | 47136 | 0.0372 | 18886368 |
| 0.0451 | 2.5001 | 58920 | 0.0397 | 23595456 |
| 0.0107 | 3.0001 | 70704 | 0.0555 | 28323552 |
| 0.0558 | 3.5001 | 82488 | 0.0385 | 33045520 |
| 0.0713 | 4.0002 | 94272 | 0.0360 | 37766912 |
| 0.0149 | 4.5002 | 106056 | 0.0393 | 42486256 |
| 0.0617 | 5.0002 | 117840 | 0.0394 | 47210640 |
| 0.0039 | 5.5002 | 129624 | 0.0463 | 51929904 |
| 0.0071 | 6.0003 | 141408 | 0.0468 | 56656208 |
| 0.0586 | 6.5003 | 153192 | 0.0671 | 61382064 |
| 0.0014 | 7.0003 | 164976 | 0.0653 | 66103104 |
| 0.0001 | 7.5003 | 176760 | 0.0901 | 70824800 |
| 0.0003 | 8.0003 | 188544 | 0.0864 | 75545952 |
| 0.0 | 8.5004 | 200328 | 0.1408 | 80268160 |
| 0.0 | 9.0004 | 212112 | 0.1439 | 84990112 |
| 0.0 | 9.5004 | 223896 | 0.1708 | 89707216 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/BiggerCoQ-Qwen3-10b-GGUF
|
mradermacher
| 2025-08-20T08:00:48Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:KaraKaraWitch/BiggerCoQ-Qwen3-10b",
"base_model:quantized:KaraKaraWitch/BiggerCoQ-Qwen3-10b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T06:17:16Z |
---
base_model: KaraKaraWitch/BiggerCoQ-Qwen3-10b
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/KaraKaraWitch/BiggerCoQ-Qwen3-10b
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#BiggerCoQ-Qwen3-10b-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q2_K.gguf) | Q2_K | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q3_K_L.gguf) | Q3_K_L | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q4_K_S.gguf) | Q4_K_S | 6.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q4_K_M.gguf) | Q4_K_M | 6.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q5_K_S.gguf) | Q5_K_S | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q5_K_M.gguf) | Q5_K_M | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q6_K.gguf) | Q6_K | 9.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.Q8_0.gguf) | Q8_0 | 11.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BiggerCoQ-Qwen3-10b-GGUF/resolve/main/BiggerCoQ-Qwen3-10b.f16.gguf) | f16 | 21.9 | 16 bpw, overkill |
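The Size/GB column above can be used to pick the largest quant that fits a given memory budget. A small illustrative helper (sizes copied from the table; the function itself is my own sketch, not part of this repository):

```python
# File sizes (GB) copied from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 4.4, "Q3_K_S": 5.0, "Q3_K_M": 5.5, "Q3_K_L": 5.9,
    "IQ4_XS": 6.2, "Q4_K_S": 6.4, "Q4_K_M": 6.7, "Q5_K_S": 7.7,
    "Q5_K_M": 7.9, "Q6_K": 9.0, "Q8_0": 11.7, "f16": 21.9,
}

def best_fit(budget_gb):
    """Largest quant whose file fits within the given budget, or None."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None
```

Note this only compares file sizes; actual memory use also depends on context length and KV-cache settings.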
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
CoDiEmb/CoDi-MiniCPM_sentence_transformers
|
CoDiEmb
| 2025-08-20T07:59:30Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"minicpm",
"sentence-similarity",
"feature-extraction",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T07:48:03Z |
---
language: []
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
datasets: []
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 2304-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 2304 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: MiniCPMModel
(1): Pooling({'word_embedding_dimension': 2304, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': False})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 2304]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
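`model.similarity` above defaults to cosine similarity between the L2-normalized embeddings. As a plain-Python illustration of the underlying measure (no model download needed; not part of this package):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```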
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.0.1
- Transformers: 4.51.3
- PyTorch: 2.2.1+cu118
- Accelerate:
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
jfang/crater-intelli-v1-vit-b-256-0820
|
jfang
| 2025-08-20T07:58:37Z | 0 | 0 |
pytorch
|
[
"pytorch",
"crater_embedding",
"mars",
"crater",
"embedding",
"retrieval",
"contrastive-learning",
"vision",
"feature-extraction",
"en",
"dataset:mars-crater-catalog",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2025-08-20T07:43:07Z |
---
license: apache-2.0
tags:
- mars
- crater
- embedding
- retrieval
- contrastive-learning
- vision
- feature-extraction
datasets:
- mars-crater-catalog
language:
- en
library_name: pytorch
pipeline_tag: feature-extraction
---
# Crater Intelligence v1 - Mars Crater Embedding Model
## Model Description
This model generates 256-dimensional embeddings for Mars crater images, trained using multi-crop contrastive learning with hard negative mining. It's designed for instance-level crater retrieval and identification tasks.
### Architecture
- **Backbone**: Vision Transformer (ViT-B/16) pretrained with Mars MAE
- **Input**: Single-channel grayscale crater images (224×224)
- **Output**: 256-dimensional normalized embeddings
- **Training**: Multi-crop contrastive learning (DINO/SwAV style)
## Key Features
- **Instance-level understanding**: Distinguishes individual craters even when visually similar
- **Part-to-whole matching**: Recognizes partial crater views (rims, quadrants)
- **Scale invariance**: Robust to different crater sizes and zoom levels
- **Mars-specific**: Pretrained on Mars imagery for optimal performance
## Installation
```bash
pip install torch torchvision timm huggingface_hub
```
## Usage
### Quick Start
```python
import torch
from huggingface_hub import hf_hub_download
# Download model
model_path = hf_hub_download(
repo_id="jfang/crater-intelli-v1-vit-b-256-0820",
filename="pytorch_model.bin"
)
# Load model (simple version)
import timm
import torch.nn as nn
class CraterEmbedder(nn.Module):
def __init__(self, model_path):
super().__init__()
# Create ViT backbone
self.backbone = timm.create_model(
'vit_base_patch16_224',
in_chans=1,
num_classes=0,
global_pool='token'
)
# Projection head
self.proj = nn.Sequential(
nn.Linear(768, 1024),
nn.GELU(),
nn.Linear(1024, 256)
)
# Load weights
state_dict = torch.load(model_path, map_location='cpu')
self.load_state_dict(state_dict)
def forward(self, x):
# x: [B, 1, 224, 224]
features = self.backbone(x)
embeddings = self.proj(features)
# L2 normalize
return torch.nn.functional.normalize(embeddings, p=2, dim=-1)
# Initialize model
model = CraterEmbedder(model_path)
model.eval()
# Process crater image
from PIL import Image
import torchvision.transforms as T
transform = T.Compose([
T.Grayscale(num_output_channels=1),
T.Resize((224, 224)),
T.ToTensor(),
T.Normalize(mean=[0.5], std=[0.25])
])
# Load your crater image
img = Image.open("crater.jpg")
img_tensor = transform(img).unsqueeze(0) # [1, 1, 224, 224]
# Get embedding
with torch.no_grad():
embedding = model(img_tensor) # [1, 256]
print(f"Embedding shape: {embedding.shape}")
print(f"Embedding norm: {embedding.norm():.3f}") # Should be ~1.0
```
### Advanced Usage - Crater Retrieval
```python
import numpy as np
import torch
import torch.nn.functional as F
from typing import List, Tuple
class CraterRetriever:
def __init__(self, model):
self.model = model
self.model.eval()
self.gallery_embeddings = None
self.gallery_ids = None
def build_gallery(self, images: List[torch.Tensor], crater_ids: List[str]):
"""Build gallery of crater embeddings."""
embeddings = []
with torch.no_grad():
for img_batch in torch.split(torch.stack(images), 32):
emb = self.model(img_batch)
embeddings.append(emb)
self.gallery_embeddings = torch.cat(embeddings, dim=0)
self.gallery_ids = crater_ids
def retrieve(self, query_image: torch.Tensor, k: int = 10) -> List[Tuple[str, float]]:
"""Retrieve k most similar craters."""
with torch.no_grad():
query_emb = self.model(query_image.unsqueeze(0))
# Compute cosine similarities
similarities = F.cosine_similarity(
query_emb.unsqueeze(1),
self.gallery_embeddings.unsqueeze(0),
dim=2
).squeeze(0)
# Get top-k
topk_sims, topk_indices = similarities.topk(k)
results = []
for sim, idx in zip(topk_sims, topk_indices):
results.append((self.gallery_ids[idx], sim.item()))
return results
# Example usage
retriever = CraterRetriever(model)
# Build gallery from your crater database
gallery_images = [...] # List of preprocessed crater tensors
gallery_ids = [...] # List of crater IDs
retriever.build_gallery(gallery_images, gallery_ids)
# Query with a new crater
query = transform(Image.open("query_crater.jpg")).unsqueeze(0)
results = retriever.retrieve(query, k=5)
for crater_id, similarity in results:
print(f"Crater {crater_id}: {similarity:.3f}")
```
### Batch Processing
```python
def process_crater_batch(model, image_paths: List[str], batch_size: int = 32):
"""Process multiple crater images efficiently."""
embeddings = []
for i in range(0, len(image_paths), batch_size):
batch_paths = image_paths[i:i+batch_size]
batch_tensors = []
for path in batch_paths:
img = Image.open(path)
img_tensor = transform(img)
batch_tensors.append(img_tensor)
batch = torch.stack(batch_tensors)
with torch.no_grad():
batch_embeddings = model(batch)
embeddings.append(batch_embeddings)
return torch.cat(embeddings, dim=0)
# Process large crater catalog
crater_paths = ["crater1.jpg", "crater2.jpg", ...]
all_embeddings = process_crater_batch(model, crater_paths)
```
## Input Requirements
- **Format**: Single-channel grayscale images
- **Size**: 224×224 pixels (will be resized if different)
- **Normalization**: Mean=0.5, Std=0.25
- **Data type**: Float32 tensors
## Performance
### Retrieval Metrics (Validation Set)
- **Whole crater R@1**: 95%+
- **Partial views**:
- Rim crops: ~35%
- Quadrant crops: ~25%
- Offset crops: ~40%
- Zoom crops: ~90%
### Training Details
- **Method**: Multi-crop contrastive learning
- **Loss**: Supervised Contrastive (SupCon)
- **Temperature**: Cosine annealing 0.1 → 0.04
- **Batch size**: 64 × (2 global + 6 local views)
- **Optimizer**: AdamW with discriminative LR
- **Backbone LR**: 5e-5 (10× slower than head)
- **Projection head LR**: 5e-4
## Limitations
1. **Mars-specific**: Trained on Mars craters, may not generalize to other planets
2. **Resolution**: Optimized for 224×224 input, very high-res details may be lost
3. **Single channel**: Expects grayscale images, not multi-spectral
4. **Crater-centered**: Best performance when crater is roughly centered
## Citation
```bibtex
@misc{crater_intelligence_v1,
title={Crater Intelligence v1: Mars Crater Instance Embedding},
author={Fang, J},
year={2024},
publisher={HuggingFace},
howpublished={\url{https://huggingface.co/jfang/crater-intelli-v1-vit-b-256-0820}}
}
```
## License
Apache 2.0
## Acknowledgments
- Backbone pretrained with Mars MAE on Mars orbital imagery
- Training data from Mars crater catalogs
- Contrastive learning approach inspired by DINO/SwAV
|
WIHOW3H/my_awesome_video_cls_model
|
WIHOW3H
| 2025-08-20T07:56:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-08-20T07:56:31Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_video_cls_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_video_cls_model
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2573
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0813 | 0.25 | 300 | 1.1575 | 0.4571 |
| 1.4385 | 1.25 | 600 | 1.5501 | 0.6429 |
| 0.0222 | 2.25 | 900 | 0.4601 | 0.8429 |
| 1.2499 | 3.25 | 1200 | 0.2573 | 0.9286 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
AXERA-TECH/MiniCPM4-0.5B
|
AXERA-TECH
| 2025-08-20T07:56:47Z | 8 | 0 | null |
[
"minicpm4",
"int8",
"text-generation",
"en",
"base_model:openbmb/MiniCPM4-0.5B",
"base_model:finetune:openbmb/MiniCPM4-0.5B",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-11T17:14:28Z |
---
license: mit
language:
- en
base_model:
- openbmb/MiniCPM4-0.5B
pipeline_tag: text-generation
tags:
- minicpm4
- int8
---
# MiniCPM4-0.5B-Int8
This version of MiniCPM4-0.5B has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 4.2 (not released yet)
## Conversion tool links
For those interested in model conversion, you can try exporting the axmodel from the original repo:
https://huggingface.co/openbmb/MiniCPM4-0.5B
[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)
[AXera NPU LLM Runtime](https://github.com/AXERA-TECH/ax-llm)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
|Chips|w8a16|w4a16|
|--|--|--|
|AX650| 36 tokens/sec|TBD|
|AX630C| 12 tokens/sec|TBD|
## How to use
Download all files from this repository to the device
```
root@ax650:/mnt/qtang/llm-test/minicpm4-0.5b-ctx# tree -L 1
.
|-- main_ax650
|-- main_axcl_aarch64
|-- main_axcl_x86
|-- minicpm4-0.5b-int8-ctx-ax650
|-- minicpm4_tokenizer
|-- minicpm4_tokenizer_uid.py
|-- post_config.json
|-- run_minicpm4_0.5b_int8_ctx_ax650.sh
`-- run_minicpm4_0.5b_int8_ctx_axcl_x86.sh
2 directories, 7 files
```
#### Start the Tokenizer service
Install requirement
```
pip install transformers jinja2
```
```
root@ax650:/mnt/qtang/llm-test/minicpm4-0.5b-ctx# python3 minicpm4_tokenizer_uid.py
Server running at http://0.0.0.0:12345
```
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board
Open another terminal and run `run_minicpm4_0.5b_int8_ctx_ax650.sh`
```
root@ax650:/mnt/qtang/llm-test/minicpm4-0.5b-ctx# ./run_minicpm4_0.5b_int8_ctx_ax650.sh
[I][ Init][ 110]: LLM init start
[I][ Init][ 34]: connect http://127.0.0.1:12345 ok
[I][ Init][ 57]: uid: c779ded0-ff14-4877-869b-1aacc948f2d8
bos_id: 1, eos_id: 73440
100% | ████████████████████████████████ | 27 / 27 [2.53s<2.53s, 10.67 count/s] init post axmodel ok,remain_cmm(4244 MB)
[I][ Init][ 188]: max_token_len : 1023
[I][ Init][ 193]: kv_cache_size : 128, kv_cache_num: 1023
[I][ Init][ 201]: prefill_token_num : 128
[I][ Init][ 205]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 205]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 205]: grp: 3, prefill_max_token_num : 512
[I][ Init][ 209]: prefill_max_token_num : 512
[I][ load_config][ 282]: load config:
{
"enable_repetition_penalty": false,
"enable_temperature": false,
"enable_top_k_sampling": true,
"enable_top_p_sampling": false,
"penalty_window": 20,
"repetition_penalty": 1.2,
"temperature": 0.9,
"top_k": 1,
"top_p": 0.8
}
[I][ Init][ 218]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 271]: input token num : 25, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 308]: input_num_token:25
[I][ main][ 230]: precompute_len: 25
[I][ main][ 231]: system_prompt: You are MiniCPM4, created by ModelBest. You are a helpful assistant.
prompt >> 你是谁?
[I][ SetKVCache][ 531]: prefill_grpid:2 kv_cache_num:128 precompute_len:25 input_num_token:12
[I][ SetKVCache][ 534]: current prefill_max_token_num:384
[I][ Run][ 660]: input token num : 12, prefill_split_num : 1
[I][ Run][ 686]: input_num_token:12
[I][ Run][ 829]: ttft: 147.65 ms
你好,我是MiniCPM系列模型,由面壁智能和OpenBMB开源社区开发。详细信息请访问https://github.com/OpenBMB/
[N][ Run][ 943]: hit eos,avg 35.75 token/s
[I][ GetKVCache][ 500]: precompute_len:162, remaining:350
prompt >> 9.9与9.11
[I][ SetKVCache][ 531]: prefill_grpid:3 kv_cache_num:512 precompute_len:162 input_num_token:17
[I][ SetKVCache][ 534]: current prefill_max_token_num:256
[I][ Run][ 660]: input token num : 17, prefill_split_num : 1
[I][ Run][ 686]: input_num_token:17
[I][ Run][ 829]: ttft: 274.38 ms
9.9比9.11大。
[N][ Run][ 943]: hit eos,avg 35.44 token/s
[I][ GetKVCache][ 500]: precompute_len:189, remaining:323
prompt >> q
root@ax650:/mnt/qtang/llm-test/minicpm4-0.5b-ctx#
```
|
crislmfroes/svla-panda-open-base-cabinet-sim-v16
|
crislmfroes
| 2025-08-20T07:56:44Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:crislmfroes/panda-open-base-cabinet-v16",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-20T07:56:34Z |
---
base_model: lerobot/smolvla_base
datasets: crislmfroes/panda-open-base-cabinet-v16
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- lerobot
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
SwetaJena/llama-3.2-1B-dolphin_numbers_student_12
|
SwetaJena
| 2025-08-20T07:55:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T07:55:09Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SwetaJena
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ElToro2602/blockassist-bc-raging_prehistoric_chameleon_1755676420
|
ElToro2602
| 2025-08-20T07:54:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging prehistoric chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:54:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging prehistoric chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_2e-05
|
joanna302
| 2025-08-20T07:53:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T09:27:16Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_2e-05
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_2e-05/runs/urz9gi0n)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_8e-05
|
joanna302
| 2025-08-20T07:53:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"unsloth",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T11:50:36Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_8e-05
tags:
- generated_from_trainer
- sft
- trl
- unsloth
licence: license
---
# Model Card for Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_8e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_8e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_pag_alpaca_0.66_part_SFT_8e-05/runs/73ibqx5t)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
bimabk/a51ae003-de10-4a7c-80ea-f24dbec64122
|
bimabk
| 2025-08-20T07:53:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"text-generation",
"base_model:adapter:unsloth/SmolLM2-135M",
"dpo",
"lora",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/SmolLM2-135M",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:25:54Z |
---
base_model: unsloth/SmolLM2-135M
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/SmolLM2-135M
- dpo
- lora
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755674693
|
mang3dd
| 2025-08-20T07:51:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:51:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-alert_snorting_fox_1755676248
|
AnerYubo
| 2025-08-20T07:50:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert snorting fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:50:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert snorting fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755674653
|
lisaozill03
| 2025-08-20T07:50:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:50:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kevinshin/qwen3-1.7b-dpo-lr-1e-6-batch-16-epoch-1-wildchat-cw-3k
|
kevinshin
| 2025-08-20T07:48:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"dpo",
"trl",
"conversational",
"dataset:kevinshin/wildchat-creative-writing-3k-pref",
"arxiv:2305.18290",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T03:52:03Z |
---
base_model: Qwen/Qwen3-1.7B
datasets: kevinshin/wildchat-creative-writing-3k-pref
library_name: transformers
model_name: qwen3-1.7b-dpo-lr-1e-6-batch-16-epoch-1-wildchat-cw-3k
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for qwen3-1.7b-dpo-lr-1e-6-batch-16-epoch-1-wildchat-cw-3k
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) on the [kevinshin/wildchat-creative-writing-3k-pref](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-pref) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-dpo-lr-1e-6-batch-16-epoch-1-wildchat-cw-3k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/vlm6iwxd)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
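DPO maximizes the log-sigmoid of the scaled preference margin between chosen and rejected completions relative to a frozen reference model. A minimal sketch with hypothetical per-sequence log-probabilities (the beta value and log-probs below are illustrative, not this run's actual numbers):

```python
import torch
import torch.nn.functional as F

beta = 0.1  # KL penalty strength; illustrative, not the value used for this model

# Hypothetical summed log-probs of chosen/rejected completions under the
# policy being trained and the frozen reference model.
policy_chosen = torch.tensor([-12.0, -8.5])
policy_rejected = torch.tensor([-14.0, -9.0])
ref_chosen = torch.tensor([-12.5, -8.0])
ref_rejected = torch.tensor([-13.0, -9.5])

# Margin of the policy's preference over the reference's preference.
margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
loss = -F.logsigmoid(beta * margin).mean()
print(loss.item())
```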
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ahmedheakl/iter0_mm_llamafactory_20250820_114433
|
ahmedheakl
| 2025-08-20T07:48:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] | null | 2025-08-20T07:46:33Z |
---
library_name: peft
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: iter0_mm_llamafactory_20250820_114433
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iter0_mm_llamafactory_20250820_114433
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the infographics50 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 5
- total_train_batch_size: 20
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
|
joanna302
| 2025-08-20T07:46:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:45:34Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05/runs/g8oh4b3r)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755675892
|
yaelahnal
| 2025-08-20T07:46:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:45:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_8e-05
|
joanna302
| 2025-08-20T07:45:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:47:27Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_8e-05
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_8e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_8e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_8e-05/runs/vuased7f)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755674272
|
kojeklollipop
| 2025-08-20T07:44:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:44:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755674028
|
hakimjustbao
| 2025-08-20T07:41:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:41:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755674102
|
calegpedia
| 2025-08-20T07:41:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:41:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Albertdebeauvais/all-MiniLM-L6-v2_bibliographie
|
Albertdebeauvais
| 2025-08-20T07:40:53Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:388038",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-07-15T09:04:27Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:388038
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: Les Chemins de l'effort, édité en 1975, Paris, éd. Actes Sud.
sentences:
- chisinau, Actes Sud, FECOURA, Dusty et VEHIER, Tyrrell, en 1915, « Les œufs d'or
de ; oa guerre ».
- Zagreb, éd. CNRS, FABRIE, Seneca, FARHAT, Hope, DE LASTIC SAINT JAL, Daniella,
1997, Poe et les enseignements de l'Est.
- (1975), Paris, Actes Sud éditions, « Les Chemins de l'effort ».
- source_sentence: par BURTSCHELL, Régine et ALESSANDRONI, Diggory, 1900, « Complément
du catalogue analytique des manuscrits de la bibliothèque d'Abbeville », Rennes,
Verlag Ferdinand Schöningh éditions.
sentences:
- « Complément du catalogue analytique des manuscrots de la bibliothèque d'Abbeville
», Rennes, Verlag Ferdinand Schöningh éditions, BURTSCHELL, Régine, sous la direction
de ALESSANDRONI, Diggory, (1900).
- Vortex, le cheval fou, publié en ; 1926, , Bordeaux, L’Harmattan.
- 1997, DEPOUMEAUX, Summer, « De chair et de lumière », Luxembourg, L’Harmattan
éditions.
- source_sentence: de Lorita, STREIFF, Petronella, MONTIALOUX, Gale, DANGOUMAU et
ed Montgomery, D AUBERT, Dean Martin, (2011), Prague, éd. Peter Lang.
sentences:
- Amiens, University of Chicago Press, GUILLION L. et LAPERDRIX K., Autres courants,
2015.
- 'Prague, éd. : Peter Lang, , (2011), "Dean Martin", pr + 20 ill.. Gale, DANGOUMAU
et Lorita, STREIFF, Petronella, MONTIALOUX, Montgomery, D AUBERT.'
- Valerie, PAIRA, Niles, AUDUBERT, 1986, Au gré des saisons, Amsterdam, Routledge.
- source_sentence: 1948, Seattle, éd. Payot & Rivages, de Trudy, SAINT-AIME, Toponymes
finnois et germaniques en Lithuanie... Remarques sur le nom de la Vistule.
sentences:
- Toponymes finnois et germaniques en Lithuanie... Remarques sur le nom de la Vistule,
en 1952, Seattle, Payot & Rivages éditions, Delia, HOZE.
- Cologne, Les Belles Lettres, Éléments de géométrie expérimentale, à l'usage des
élèves des cours professionnels et des ouvriers, avec de nombreuses applications
au trait, LAGEIX, Shelly, (1898).
- 1887., The variations of glaciers. XVI, Jessika, ANNIEL, Chisinau, éd. Stanford
University Press.
- source_sentence: BENMAMAR, A. et LUZEUX, K., JARRAND-MARTIN, S., "La science alchimique",
Master drawings, numéro 92, pages 511-649, 1904, Valence, éd. Zed Books.
sentences:
- Dublin, éd. CNRS, Les mystères de la cour de Cornouailles, N. BILLEBEAU, en 1966.
- En 1939, New York, Fayard, réactions et méthodes nouvelles d'analyse qualitative
minérale, BERTIER, R.
- édité en 2020, Alexandre, GLERAND et Ashleigh, BIZET, "Un long voyage", Reims,
Editions Payot éditions.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: eval
type: eval
metrics:
- type: cosine_accuracy
value: 0.9845532980795992
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7197951674461365
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9822371579452713
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7197951674461365
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9880875724404379
name: Cosine Precision
- type: cosine_recall
value: 0.9764556156538339
name: Cosine Recall
- type: cosine_ap
value: 0.9978040298718638
name: Cosine Ap
- type: cosine_mcc
value: 0.9686262528236084
name: Cosine Mcc
- task:
type: binary-classification
name: Binary Classification
dataset:
name: test
type: test
metrics:
- type: cosine_accuracy
value: 0.9851563224788942
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.7434847354888916
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9829406120055443
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7414178252220154
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9907576571735626
name: Cosine Precision
- type: cosine_recall
value: 0.975245953665503
name: Cosine Recall
- type: cosine_ap
value: 0.9978710556305371
name: Cosine Ap
- type: cosine_mcc
value: 0.9698992765132763
name: Cosine Mcc
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Albertdebeauvais/all-MiniLM-L6-v2_bibliographie")
# Run inference
sentences = [
'BENMAMAR, A. et LUZEUX, K., JARRAND-MARTIN, S., "La science alchimique", Master drawings, numéro 92, pages 511-649, 1904, Valence, éd. Zed Books.',
'édité en 2020, Alexandre, GLERAND et Ashleigh, BIZET, "Un long voyage", Reims, Editions Payot éditions.',
'Dublin, éd. CNRS, Les mystères de la cour de Cornouailles, N. BILLEBEAU, en 1966.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Datasets: `eval` and `test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | eval | test |
|:--------------------------|:-----------|:-----------|
| cosine_accuracy | 0.9846 | 0.9852 |
| cosine_accuracy_threshold | 0.7198 | 0.7435 |
| cosine_f1 | 0.9822 | 0.9829 |
| cosine_f1_threshold | 0.7198 | 0.7414 |
| cosine_precision | 0.9881 | 0.9908 |
| cosine_recall | 0.9765 | 0.9752 |
| **cosine_ap** | **0.9978** | **0.9979** |
| cosine_mcc | 0.9686 | 0.9699 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 388,038 training samples
* Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text1 | text2 | label |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 17 tokens</li><li>mean: 50.25 tokens</li><li>max: 169 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 47.08 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~57.00%</li><li>1: ~43.00%</li></ul> |
* Samples:
| text1 | text2 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>(1973),. 70, p. 36-98, Revue d'histoire locale (Chevillon), 3, « La Font perduda », Berlin, éd. Maison des Sciences de l’Homme, editor Dorcas, PEDEVILLA, Alannis, GRANZOTTO, Annabel, VOYRON, Dulcie, MIGLIORI.</code> | <code>Revue d'histoire locale (Chevillon)</code> | <code>0</code> |
| <code>Revista del Instituto Egipcio de Estudios Islámicos, n°100, pages 483-496, (2006), Administration et bibliothèques, CAGLAYAN, Kaden, BOULAABI, Fredrick, WORMSER, Bea, Vienne, éd. Beacon Press.</code> | <code>WORMSER, Bea, CAGLAYAN, Kaden, ed BOULAABI, Fredrick, édité en 2006, Administration et bibliothèques, Revista del Instituto Egipcio de Estudios Islámicos,. 100,. p. 483-496, Vienne, Beacon Press.</code> | <code>1</code> |
| <code>Atlantic Charter (1941), Bulletin de la Société d'Histoire et d'Archéologie de Nantes et de Loire-Atlantique,. numéro 31, pp. 997-1125, Léontine, SCHWERDROFFER, Sandford, CHUDZIK, Metz, Zed Books éditions, 1941.</code> | <code>(1941),. n° 31, Bulletin de la Société d'Histoire et d'Archéologie de Nantes et de Loire-Atlantique, pages 997-1125, Atlantic Charter (1941), Léontine, SCHWERDROFFER, Sandford, CHUDZIK, Metz, Zed Books.</code> | <code>1</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
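With an MSE objective, CosineSimilarityLoss pushes the cosine similarity of each embedded pair toward its 0/1 label. A minimal sketch with hypothetical embeddings (the real loss operates on the model's pooled sentence embeddings):

```python
import torch
import torch.nn.functional as F

# Hypothetical pooled sentence embeddings for two citation pairs (dim 384,
# matching this model's output dimensionality).
emb1 = torch.randn(2, 384)
emb2 = torch.randn(2, 384)
labels = torch.tensor([1.0, 0.0])  # same-reference pair vs. unrelated pair

cos = F.cosine_similarity(emb1, emb2, dim=1)
loss = F.mse_loss(cos, labels)  # the MSELoss named in the loss parameters
print(loss.item())
```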
### Evaluation Dataset
#### Unnamed Dataset
* Size: 21,558 evaluation samples
* Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text1 | text2 | label |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 15 tokens</li><li>mean: 49.64 tokens</li><li>max: 145 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 46.31 tokens</li><li>max: 160 tokens</li></ul> | <ul><li>0: ~57.70%</li><li>1: ~42.30%</li></ul> |
* Samples:
| text1 | text2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Le Progressisme, aspects doctrinaux, DURAZ, Constance, 1955, Montpellier, éd. Routledge,. vol. 1,. pp. 26-39, n°29, Journal of philosophical research.</code> | <code>1955, "Le Progressisme, aspects doctrinaux", Montpellier, Routledge, Journal of philosophical research, pp. 26-39,. volume 1, #29.</code> | <code>1</code> |
| <code>Turin, éd. Suhrkamp Verlag, #17, pages 67-111, 2, Annales d'Avignon et du Comtat Venaissin, "Faire face aux crises de colère de l'enfant et de l'adolescent", ed HERREYE, Kassidy, (2019).</code> | <code>Amsterdam, University of Minnesota Press éditions, (1968), "Ainsi de chaque jour".</code> | <code>0</code> |
| <code>« Discours et conférences sur la science et ses applications », publié en 1927, Tours, éd. Actes Sud, Cherise, THIEFIN et de Eudora, FINGERHUT et Rona, DELLAL et Josette, DEGIOANNINI.</code> | <code> Les formes verbales du conditionnel dans le vieux sanskrit , Eudora, FINGERHUT et Cherise, THIEFIN et par Rona, DELLAL, par Josette, DEGIOANNINI, Tours, Actes Sud éditions, publié en 1927.</code> | <code>0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
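Conceptually, this setup regresses the cosine similarity of the two sentence embeddings onto the 0/1 label with an MSE objective. A plain-Python sketch of the per-pair computation (illustrative only; the actual loss is computed batched in PyTorch):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_mse_loss(emb1, emb2, label):
    # Squared error between the pair's cosine similarity and its 0/1 label,
    # mirroring CosineSimilarityLoss with an MSELoss loss_fct
    return (cosine_similarity(emb1, emb2) - label) ** 2
```

An identical pair labelled 1 yields a loss of 0, while an orthogonal pair labelled 1 yields a loss of 1.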
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `learning_rate`: 3e-05
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | eval_cosine_ap | test_cosine_ap |
|:------:|:-----:|:-------------:|:---------------:|:--------------:|:--------------:|
| -1 | -1 | - | - | 0.8231 | - |
| 0.0258 | 500 | 0.1033 | - | - | - |
| 0.0515 | 1000 | 0.0885 | - | - | - |
| 0.0773 | 1500 | 0.0778 | - | - | - |
| 0.1031 | 2000 | 0.0721 | - | - | - |
| 0.1289 | 2500 | 0.0697 | - | - | - |
| 0.1546 | 3000 | 0.0645 | - | - | - |
| 0.1804 | 3500 | 0.0619 | - | - | - |
| 0.2062 | 4000 | 0.0604 | - | - | - |
| 0.2319 | 4500 | 0.0569 | - | - | - |
| 0.2577 | 5000 | 0.0545 | - | - | - |
| 0.2835 | 5500 | 0.0539 | - | - | - |
| 0.3092 | 6000 | 0.0517 | - | - | - |
| 0.3350 | 6500 | 0.0506 | - | - | - |
| 0.3608 | 7000 | 0.0511 | - | - | - |
| 0.3866 | 7500 | 0.0486 | - | - | - |
| 0.4123 | 8000 | 0.0463 | - | - | - |
| 0.4381 | 8500 | 0.0463 | - | - | - |
| 0.4639 | 9000 | 0.0471 | - | - | - |
| 0.4896 | 9500 | 0.0454 | - | - | - |
| 0.5154 | 10000 | 0.0445 | - | - | - |
| 0.5412 | 10500 | 0.0455 | - | - | - |
| 0.5670 | 11000 | 0.0441 | - | - | - |
| 0.5927 | 11500 | 0.0437 | - | - | - |
| 0.6185 | 12000 | 0.0449 | - | - | - |
| 0.6443 | 12500 | 0.0413 | - | - | - |
| 0.6700 | 13000 | 0.0413 | - | - | - |
| 0.6958 | 13500 | 0.0422 | - | - | - |
| 0.7216 | 14000 | 0.0411 | - | - | - |
| 0.7473 | 14500 | 0.0404 | - | - | - |
| 0.7731 | 15000 | 0.0374 | - | - | - |
| 0.7989 | 15500 | 0.0378 | - | - | - |
| 0.8247 | 16000 | 0.0384 | - | - | - |
| 0.8504 | 16500 | 0.0389 | - | - | - |
| 0.8762 | 17000 | 0.0377 | - | - | - |
| 0.9020 | 17500 | 0.0374 | - | - | - |
| 0.9277 | 18000 | 0.0366 | - | - | - |
| 0.9535 | 18500 | 0.0368 | - | - | - |
| 0.9793 | 19000 | 0.0367 | - | - | - |
| 1.0 | 19402 | - | 0.0310 | 0.9965 | - |
| 1.0051 | 19500 | 0.0364 | - | - | - |
| 1.0308 | 20000 | 0.0323 | - | - | - |
| 1.0566 | 20500 | 0.0319 | - | - | - |
| 1.0824 | 21000 | 0.0317 | - | - | - |
| 1.1081 | 21500 | 0.0298 | - | - | - |
| 1.1339 | 22000 | 0.0336 | - | - | - |
| 1.1597 | 22500 | 0.0304 | - | - | - |
| 1.1854 | 23000 | 0.0302 | - | - | - |
| 1.2112 | 23500 | 0.031 | - | - | - |
| 1.2370 | 24000 | 0.0301 | - | - | - |
| 1.2628 | 24500 | 0.0302 | - | - | - |
| 1.2885 | 25000 | 0.0305 | - | - | - |
| 1.3143 | 25500 | 0.0293 | - | - | - |
| 1.3401 | 26000 | 0.0307 | - | - | - |
| 1.3658 | 26500 | 0.0304 | - | - | - |
| 1.3916 | 27000 | 0.03 | - | - | - |
| 1.4174 | 27500 | 0.0312 | - | - | - |
| 1.4432 | 28000 | 0.0296 | - | - | - |
| 1.4689 | 28500 | 0.0301 | - | - | - |
| 1.4947 | 29000 | 0.0295 | - | - | - |
| 1.5205 | 29500 | 0.0295 | - | - | - |
| 1.5462 | 30000 | 0.029 | - | - | - |
| 1.5720 | 30500 | 0.0295 | - | - | - |
| 1.5978 | 31000 | 0.029 | - | - | - |
| 1.6235 | 31500 | 0.029 | - | - | - |
| 1.6493 | 32000 | 0.0271 | - | - | - |
| 1.6751 | 32500 | 0.029 | - | - | - |
| 1.7009 | 33000 | 0.0278 | - | - | - |
| 1.7266 | 33500 | 0.0286 | - | - | - |
| 1.7524 | 34000 | 0.0272 | - | - | - |
| 1.7782 | 34500 | 0.0279 | - | - | - |
| 1.8039 | 35000 | 0.0285 | - | - | - |
| 1.8297 | 35500 | 0.0286 | - | - | - |
| 1.8555 | 36000 | 0.0297 | - | - | - |
| 1.8812 | 36500 | 0.0273 | - | - | - |
| 1.9070 | 37000 | 0.0269 | - | - | - |
| 1.9328 | 37500 | 0.0276 | - | - | - |
| 1.9586 | 38000 | 0.0278 | - | - | - |
| 1.9843 | 38500 | 0.0267 | - | - | - |
| 2.0 | 38804 | - | 0.0248 | 0.9976 | - |
| 2.0101 | 39000 | 0.0252 | - | - | - |
| 2.0359 | 39500 | 0.0233 | - | - | - |
| 2.0616 | 40000 | 0.0233 | - | - | - |
| 2.0874 | 40500 | 0.0236 | - | - | - |
| 2.1132 | 41000 | 0.023 | - | - | - |
| 2.1390 | 41500 | 0.0212 | - | - | - |
| 2.1647 | 42000 | 0.0233 | - | - | - |
| 2.1905 | 42500 | 0.0227 | - | - | - |
| 2.2163 | 43000 | 0.0227 | - | - | - |
| 2.2420 | 43500 | 0.0233 | - | - | - |
| 2.2678 | 44000 | 0.0241 | - | - | - |
| 2.2936 | 44500 | 0.0218 | - | - | - |
| 2.3193 | 45000 | 0.0232 | - | - | - |
| 2.3451 | 45500 | 0.0235 | - | - | - |
| 2.3709 | 46000 | 0.024 | - | - | - |
| 2.3967 | 46500 | 0.0237 | - | - | - |
| 2.4224 | 47000 | 0.0228 | - | - | - |
| 2.4482 | 47500 | 0.0231 | - | - | - |
| 2.4740 | 48000 | 0.0223 | - | - | - |
| 2.4997 | 48500 | 0.0232 | - | - | - |
| 2.5255 | 49000 | 0.022 | - | - | - |
| 2.5513 | 49500 | 0.0227 | - | - | - |
| 2.5771 | 50000 | 0.0226 | - | - | - |
| 2.6028 | 50500 | 0.0233 | - | - | - |
| 2.6286 | 51000 | 0.0224 | - | - | - |
| 2.6544 | 51500 | 0.0224 | - | - | - |
| 2.6801 | 52000 | 0.0224 | - | - | - |
| 2.7059 | 52500 | 0.022 | - | - | - |
| 2.7317 | 53000 | 0.0223 | - | - | - |
| 2.7574 | 53500 | 0.023 | - | - | - |
| 2.7832 | 54000 | 0.023 | - | - | - |
| 2.8090 | 54500 | 0.023 | - | - | - |
| 2.8348 | 55000 | 0.0225 | - | - | - |
| 2.8605 | 55500 | 0.0229 | - | - | - |
| 2.8863 | 56000 | 0.0229 | - | - | - |
| 2.9121 | 56500 | 0.0224 | - | - | - |
| 2.9378 | 57000 | 0.0218 | - | - | - |
| 2.9636 | 57500 | 0.0226 | - | - | - |
| 2.9894 | 58000 | 0.0229 | - | - | - |
| 3.0 | 58206 | - | 0.0231 | 0.9978 | - |
| -1 | -1 | - | - | - | 0.9979 |
</details>
### Framework Versions
- Python: 3.12.0
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.1+cu128
- Accelerate: 1.8.1
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Mostefa-Terbeche/diabetic-retinopathy-eyepacs-resnet50-gentle-20250619-172901
|
Mostefa-Terbeche
| 2025-08-20T07:40:38Z | 0 | 0 | null |
[
"diabetic-retinopathy",
"medical-imaging",
"pytorch",
"computer-vision",
"retinal-imaging",
"dataset:eyepacs",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-08-20T06:49:07Z |
---
license: apache-2.0
tags:
- diabetic-retinopathy
- medical-imaging
- pytorch
- computer-vision
- retinal-imaging
datasets:
- eyepacs
metrics:
- accuracy
- quadratic-kappa
- auc
model-index:
- name: eyepacs_resnet50_gentle
results:
- task:
type: image-classification
name: Diabetic Retinopathy Classification
dataset:
type: eyepacs
name: EYEPACS
metrics:
- type: accuracy
value: 0.13265015656134357
- type: quadratic-kappa
value: 0.40586976179692624
---
# Diabetic Retinopathy Classification Model
## Model Description
This model is trained for diabetic retinopathy classification using the resnet50 architecture on the eyepacs dataset with gentle preprocessing.
## Model Details
- **Architecture**: resnet50
- **Dataset**: eyepacs
- **Preprocessing**: gentle
- **Training Date**: 20250619-172901
- **Task**: 5-class diabetic retinopathy grading (0-4)
- **Directory**: eyepacs_resnet50_20250619-172901_new
## Performance
- **Test Accuracy**: 0.13265015656134357
- **Test Quadratic Kappa**: 0.40586976179692624
- **Validation Kappa**: 0.40586976179692624
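Quadratic weighted kappa penalizes disagreements by the squared distance between predicted and true grades, which suits ordinal DR grading (confusing grade 0 with grade 4 costs more than confusing 0 with 1). A self-contained sketch of the metric, not necessarily the exact implementation used during training:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    # Observed confusion matrix
    observed = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    # Quadratic disagreement weights: 0 on the diagonal, growing with distance
    weights = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
               for i in range(n_classes)]
    # Expected matrix under chance agreement (outer product of the marginals)
    n = len(y_true)
    hist_true = [sum(1 for t in y_true if t == i) for i in range(n_classes)]
    hist_pred = [sum(1 for p in y_pred if p == i) for i in range(n_classes)]
    expected = [[hist_true[i] * hist_pred[j] / n for j in range(n_classes)]
                for i in range(n_classes)]
    num = sum(weights[i][j] * observed[i][j]
              for i in range(n_classes) for j in range(n_classes))
    den = sum(weights[i][j] * expected[i][j]
              for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den
```

Perfect agreement gives a kappa of 1.0; systematic disagreement can drive it below 0.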
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
# Download the checkpoint from the Hub
model_path = hf_hub_download(
    repo_id="Mostefa-Terbeche/diabetic-retinopathy-eyepacs-resnet50-gentle-20250619-172901",
    filename="model_best.pt"
)
# Load the full pickled model (PyTorch >= 2.6 defaults to weights_only=True,
# which rejects pickled models, so pass weights_only=False explicitly)
model = torch.load(model_path, map_location="cpu", weights_only=False)
model.eval()
```
## Classes
- 0: No DR (No diabetic retinopathy)
- 1: Mild DR (Mild non-proliferative diabetic retinopathy)
- 2: Moderate DR (Moderate non-proliferative diabetic retinopathy)
- 3: Severe DR (Severe non-proliferative diabetic retinopathy)
- 4: Proliferative DR (Proliferative diabetic retinopathy)
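Assuming the checkpoint returns raw logits over these five grades, a minimal post-processing sketch (the logits below are illustrative, not model output):

```python
import math

DR_CLASSES = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

def softmax(logits):
    # Numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def grade_from_logits(logits):
    # Map raw logits to a DR grade index and its human-readable label
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return idx, DR_CLASSES[idx]
```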
## Citation
If you use this model, please cite the associated research paper or thesis.
|
Noredine67/mon-nouveau-redacteur-EE
|
Noredine67
| 2025-08-20T07:39:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T07:39:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ElToro2602/blockassist-bc-raging_prehistoric_chameleon_1755675427
|
ElToro2602
| 2025-08-20T07:38:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging prehistoric chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:37:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging prehistoric chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Adun/openai-gpt-oss-20b-thaifood
|
Adun
| 2025-08-20T07:37:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T07:37:15Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Adun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755674230
|
Sayemahsjn
| 2025-08-20T07:36:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:36:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arianaazarbal/standard_tpr_0.65_test-20250820_070706-policy-adapter
|
arianaazarbal
| 2025-08-20T07:35:40Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-20T07:34:41Z |
# Policy Model LoRA Adapter (GRPO/DPO)
Experiment: standard_tpr_0.65_test
Timestamp: 20250820_070706
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Policy Model LoRA Adapter (GRPO/DPO)
- **Experiment Name**: standard_tpr_0.65_test
- **Training Timestamp**: 20250820_070706
|
nema122/blockassist-bc-robust_fluffy_ram_1755675206
|
nema122
| 2025-08-20T07:35:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:34:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
arianaazarbal/standard_tpr_0.65_test-20250820_070706-rm-adapter
|
arianaazarbal
| 2025-08-20T07:34:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T07:34:08Z |
# Reward Model LoRA Adapter
Experiment: standard_tpr_0.65_test
Timestamp: 20250820_070706
This model was trained as part of the deception-evasion-honesty experiments.
## Model Details
- **Type**: Reward Model LoRA Adapter
- **Experiment Name**: standard_tpr_0.65_test
- **Training Timestamp**: 20250820_070706
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755673744
|
sampingkaca72
| 2025-08-20T07:34:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:34:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755673415
|
coelacanthxyz
| 2025-08-20T07:33:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:33:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hobson123/blockassist-bc-mammalian_dense_gibbon_1755674761
|
hobson123
| 2025-08-20T07:32:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mammalian dense gibbon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:31:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mammalian dense gibbon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chainway9/blockassist-bc-untamed_quick_eel_1755673482
|
chainway9
| 2025-08-20T07:29:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:29:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755674870
|
yaelahnal
| 2025-08-20T07:29:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:28:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eniffA/Affine-Look-Mum-I-Made-It-On-The-Internet
|
eniffA
| 2025-08-20T07:28:51Z | 232 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"vllm",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-08-11T14:25:08Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
---
<p align="center">
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
</p>
<p align="center">
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ·
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ·
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> ·
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a>
</p>
<br>
Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
We’re releasing two flavors of these open models:
- `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
- `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
> [!NOTE]
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
# Highlights
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment.
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs.
* **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
* **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
---
# Inference examples
## Transformers
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package.
To get started, install the necessary dependencies to set up your environment:
```
pip install -U transformers kernels torch
```
Once set up, you can run the model with the snippet below:
```py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:
```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
## vLLM
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.
```bash
uv pip install --pre vllm==0.10.1+gptoss \
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
--index-strategy unsafe-best-match
vllm serve openai/gpt-oss-20b
```
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
## PyTorch / Triton
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).
## Ollama
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
```bash
# gpt-oss-20b
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
```
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
#### LM Studio
If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
```bash
# gpt-oss-20b
lms get openai/gpt-oss-20b
```
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
---
# Download the model
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly using the Hugging Face CLI:
```shell
# gpt-oss-20b
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
pip install gpt-oss
python -m gpt_oss.chat model/
```
# Reasoning levels
You can adjust the reasoning level that suits your task across three levels:
* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.
The reasoning level can be set in the system prompts, e.g., "Reasoning: high".
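For example, with the Transformers chat format the level can be injected as a system message. The `with_reasoning` helper below is hypothetical, shown only to illustrate the prompt shape; it is not part of the gpt-oss API:

```python
def with_reasoning(messages, level="medium"):
    # Prepend a "Reasoning: <level>" system message, replacing any existing one
    if level not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning level: {level}")
    system = {"role": "system", "content": f"Reasoning: {level}"}
    return [system] + [m for m in messages if m["role"] != "system"]

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
prepared = with_reasoning(messages, level="high")
```

The prepared list can then be passed to the chat template or pipeline as usual.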
# Tool use
The gpt-oss models are excellent for:
* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks
# Fine-tuning
Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
|
JonusNattapong/thai-bpe-tokenizer
|
JonusNattapong
| 2025-08-20T07:24:56Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T07:24:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755674429
|
yaelahnal
| 2025-08-20T07:21:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:21:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755672877
|
mang3dd
| 2025-08-20T07:21:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:21:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
launchpd3/blockassist-bc-polished_foxy_stingray_1755674367
|
launchpd3
| 2025-08-20T07:21:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"polished foxy stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:21:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- polished foxy stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755672459
|
milliarderdol
| 2025-08-20T07:20:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:19:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vayishu/visa-minilm
|
vayishu
| 2025-08-20T07:20:26Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:1000",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-20T07:04:31Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:1000
- loss:TripletLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: What are the key points in passage fam_402.10_30?
sentences:
- ( v ) A dependent applying under [ paragraph ( s)(2 ) ( iii)](/current / title-8
/ section-214.2#p-214.2(s)(2)(iii ) ) or [ ( iv)](/current / title-8 / section-214.2#p-214.2(s)(2)(iv
) ) of this section must also submit a certified statement from the post - secondary
educational institution confirming that he or she is pursuing studies on a full
- time basis .
- "( b ) ( U ) The criteria for \n qualifying as an H-1B physician are found in\
\ subparagraph 3 below ."
- ( ii ) * What are the requirements for participation ? *
- source_sentence: What are the key points in passage 8cfr_214.3_93?
sentences:
- ( vii ) Whether the student has been certified for practical training , and the
beginning and end dates of certification .
- ( D ) Similarity of jobs and working conditions ;
- ( ii ) * What are the requirements for participation ? *
- source_sentence: Explain the significance of passage fam_402_62.
sentences:
- ( * i * ) Has competency in oral and written English which shall be demonstrated
by the passage of the English language proficiency test given by the Educational
Commission for Foreign Medical Graduates ; or
- "Derivative beneficiaries are entitled to apply for visas to \n follow and/or\
\ join principals who are maintaining status in the United States , \n even when\
\ the principal was never issued a visa in the classification being \n sought\
\ by the dependent . Take , for instance , a world - class soccer player , who\
\ \n changes their status from F-1 to O-1 . The spouse and/or children are entitled\
\ \n to apply for nonimmigrant O-3 visas . Typical documentation for establishing\
\ \n entitlement to visas in such an instance might include marriage and birth\
\ \n certificates for the spouse and dependent(s ) , a copy of the principal \n\
\ beneficiary 's approval notice , and any Form I-797 , Notice of Action notices\
\ \n relating to the dependents ' own change of status filings . Another example\
\ \n would be a foreign national who entered the United States on a B-1 visa and\
\ \n subsequently changed status to F-1 . The spouse and/or child of the F-1\
\ would \n be entitled to seek F-2 visas . In such cases , the dependent would\
\ need to \n present a properly endorsed Form I-20 , Certificate of Eligibility\
\ for \n Nonimmigrant ( F-1 ) Student Status - for Academic and Language Students\
\ , as \n evidence that the principal is enrolled , or will be enrolled within\
\ 60 days , in \n a full course of study or is in approved practical training\
\ ."
- ( 1 ) Meaning of term * Designated Official . * As used in [ § § 214.2(f)](/current
/ title-8 / section-214.2#p-214.2(f ) ) and [ ( m)](/current / title-8 / section-214.2#p-214.2(m
) ) , [ 214.3](/current / title-8 / section-214.3 ) and [ 214.4](/current / title-8
/ section-214.4 ) , a * Designated Official , Designated School Official ( DSO
) , * or * Principal Designated School Official ( PDSO ) , * means a regularly
employed member of the school administration whose office is located at the school
and whose compensation does not come from commissions for recruitment of foreign
students . An individual whose principal obligation to the school is to recruit
foreign students for compensation does not qualify as a designated official .
The PDSO and any other DSO must be named by the president , owner , or head of
a school or school system . The PDSO and DSO may not delegate this designation
to any other person .
- source_sentence: What is the main topic of passage fam_402.9_141?
sentences:
- "( 1 ) The title of the position to which the applicant \n is destined , its\
\ place in the firm 's organizational structure , the duties \n of the position\
\ , the degree to which the applicant will have ultimate control \n and responsibility\
\ for the firm 's overall operations or a major component \n thereof , the number\
\ and skill levels of the employees the applicant will \n supervise , the level\
\ of pay , and whether the applicant possesses qualifying \n executive or supervisory\
\ experience ;"
- describes methods of oversight and supervision . The Form I-983 must explain how
the training is directly related to the student 's qualifying STEM degree .
- ( A ) A nurse who is granted H-1C classification based on passage of the CGFNS
examination must , upon admission to the United States , be able to obtain temporary
licensure or other temporary authorization to practice as a registered nurse from
the State Board of Nursing in the state of intended employment .
- source_sentence: Explain the significance of passage uscis_pm_volume_2_part_f_chapter_7_1.
sentences:
- ( C ) A common formal code of doctrine and discipline ;
- ( * i * ) Has competency in oral and written English which shall be demonstrated
by the passage of the English language proficiency test given by the Educational
Commission for Foreign Medical Graduates ; or
- Chapter 7 - Absences From the United States | USCIS
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("vayishu/visa-minilm")
# Run inference
sentences = [
'Explain the significance of passage uscis_pm_volume_2_part_f_chapter_7_1.',
'Chapter 7 - Absences From the United States | USCIS',
'( * i * ) Has competency in oral and written English which shall be demonstrated by the passage of the English language proficiency test given by the Educational Commission for Foreign Medical Graduates ; or',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.3954, 0.4014],
# [0.3954, 1.0000, 0.1409],
# [0.4014, 0.1409, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 20.62 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 74.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 48.16 tokens</li><li>max: 143 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Explain the significance of passage 8cfr_214.1_85.</code> | <code># # # # § 214.1 Requirements for admission , extension , and maintenance of status .</code> | <code>( * 5 * ) Evidence of the alien 's original scientific , scholarly , or business - related contributions of major significance in the field ;</code> |
| <code>Can you summarize the content of passage 8cfr_214.2_1843?</code> | <code>( C ) A common formal code of doctrine and discipline ;</code> | <code>The Office of the Federal Register publishes documents on behalf of Federal agencies but does not have any authority over their programs . We recommend you directly contact the agency associated with the content in question .</code> |
| <code>What is the main topic of passage uscis_pm_volume_2_part_f_chapter_5_85?</code> | <code>If the [ Form I-765](/i-765 ) for the STEM OPT extension is denied and the student 's post - completion OPT EAD is expired , OPT employment authorization ends on the date of the decision and the student 's F-1 status ends 60 days after the date of denial . If the Form I-765 for the STEM OPT extension is denied and the student 's post - completion OPT EAD is unexpired , the student will remain employment authorized until the expiration date of the EAD .</code> | <code>( A ) A nurse who is granted H-1C classification based on passage of the CGFNS examination must , upon admission to the United States , be able to obtain temporary licensure or other temporary authorization to practice as a registered nurse from the State Board of Nursing in the state of intended employment .</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Tn1072/my_awesome_video_cls_model
|
Tn1072
| 2025-08-20T07:20:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-08-20T07:19:51Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_video_cls_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_video_cls_model
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1236
- Accuracy: 0.5571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0641 | 1.0 | 300 | 1.1236 | 0.5571 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
nema122/blockassist-bc-robust_fluffy_ram_1755674200
|
nema122
| 2025-08-20T07:18:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"robust fluffy ram",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:18:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- robust fluffy ram
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755672736
|
lisaozill03
| 2025-08-20T07:18:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T07:18:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gaianet/Qwen3-Coder-30B-A3B-Instruct-GGUF
|
gaianet
| 2025-08-20T07:17:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3_moe",
"text-generation",
"base_model:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"base_model:quantized:Qwen/Qwen3-Coder-30B-A3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-20T02:43:35Z |
---
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
model_creator: Qwen
model_name: Qwen3-Coder-30B-A3B-Instruct
quantized_by: Second State Inc.
pipeline_tag: text-generation
library_name: transformers
---
# Qwen3-Coder-30B-A3B-Instruct-GGUF
## Original Model
[Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct)
## Run with Gaianet
**Prompt template**
- `chatml`
**Context size**
chat_ctx_size: `256000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b6031*
|
Kokoutou/soundsright_dn_2008_2
|
Kokoutou
| 2025-08-20T07:16:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T07:02:41Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but it is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was done correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
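A minimal client sketch for the API above — the endpoint paths and default host/port come from this README, while the helper name and call order are assumptions for illustration:

```python
# Hypothetical client helper; request/response payload formats are not
# documented here, so only URL construction is shown.
BASE_URL = "http://0.0.0.0:6500"  # default host and port from this README

def endpoint(name: str) -> str:
    """Build the full URL for one of the documented endpoints."""
    paths = {
        "status": "/status/",
        "prepare": "/prepare/",
        "upload": "/upload-audio/",
        "enhance": "/enhance/",
        "download": "/download-enhanced/",
    }
    return BASE_URL + paths[name]

# Typical call order implied by the endpoint descriptions:
# status -> prepare -> upload -> enhance -> download
print(endpoint("enhance"))  # http://0.0.0.0:6500/enhance/
```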
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
| Kokoutou/soundsright_dn_2008_1 | Kokoutou | 2025-08-20T07:16:35Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T07:02:41Z |
# Container Template for SoundsRight Subnet Miners
This repository contains a containerized version of [SGMSE+](https://huggingface.co/sp-uhh/speech-enhancement-sgmse) and serves as a tutorial for miners to format their models on [Bittensor's](https://bittensor.com/) [SoundsRight Subnet](https://github.com/synapsec-ai/SoundsRightSubnet). The branches `DENOISING_16000HZ` and `DEREVERBERATION_16000HZ` contain SGMSE fitted with the appropriate checkpoints for denoising and dereverberation tasks at 16 kHz, respectively.
This container has only been tested with **Ubuntu 24.04** and **CUDA 12.6**. It may run on other configurations, but this is not guaranteed.
To run the container, first configure NVIDIA Container Toolkit and generate a CDI specification. Follow the instructions to download the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) with Apt.
Next, follow the instructions for [generating a CDI specification](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html).
Verify that the CDI specification was generated correctly with:
```
$ nvidia-ctk cdi list
```
You should see this in your output:
```
nvidia.com/gpu=all
nvidia.com/gpu=0
```
If you are running podman as root, start the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --user root --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
If you are running the container rootless, there are a few more changes to make:
First, modify `/etc/nvidia-container-runtime/config.toml` and set the following parameters:
```
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
```
You can also run the following command to achieve the same result:
```
$ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```
Run the container with:
```
podman build -t modelapi . && podman run -d --device nvidia.com/gpu=all --volume /usr/local/cuda-12.6:/usr/local/cuda-12.6 --user 10002:10002 --name modelapi -p 6500:6500 modelapi
```
Access logs with:
```
podman logs -f modelapi
```
Running the container will spin up an API with the following endpoints:
1. `/status/` : Communicates API status
2. `/prepare/` : Download model checkpoint and initialize model
3. `/upload-audio/` : Upload audio files, save to noisy audio directory
4. `/enhance/` : Initialize model, enhance audio files, save to enhanced audio directory
5. `/download-enhanced/` : Download enhanced audio files
By default the API will use host `0.0.0.0` and port `6500`.
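Assuming the container is up on `localhost:6500`, a session with these endpoints might look like the sketch below. The HTTP methods and the `file` upload field name are assumptions inferred from the endpoint descriptions, not taken from the API code:

```shell
# Hypothetical end-to-end session against the model API.
API=http://localhost:6500

if curl -fs "$API/status/" >/dev/null 2>&1; then
  curl -X POST "$API/prepare/"                      # download checkpoint, initialize model
  curl -F "file=@noisy.wav" "$API/upload-audio/"    # upload a noisy recording
  curl -X POST "$API/enhance/"                      # enhance the uploaded audio
  curl -OJ "$API/download-enhanced/"                # fetch the enhanced audio
else
  echo "API not reachable at $API"
fi
```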
### References
1. **Welker, Simon; Richter, Julius; Gerkmann, Timo**
*Speech Enhancement with Score-Based Generative Models in the Complex STFT Domain*.
Proceedings of *Interspeech 2022*, 2022, pp. 2928–2932.
[DOI: 10.21437/Interspeech.2022-10653](https://doi.org/10.21437/Interspeech.2022-10653)
2. **Richter, Julius; Welker, Simon; Lemercier, Jean-Marie; Lay, Bunlong; Gerkmann, Timo**
*Speech Enhancement and Dereverberation with Diffusion-based Generative Models*.
*IEEE/ACM Transactions on Audio, Speech, and Language Processing*, Vol. 31, 2023, pp. 2351–2364.
[DOI: 10.1109/TASLP.2023.3285241](https://doi.org/10.1109/TASLP.2023.3285241)
3. **Richter, Julius; Wu, Yi-Chiao; Krenn, Steven; Welker, Simon; Lay, Bunlong; Watanabe, Shinji; Richard, Alexander; Gerkmann, Timo**
*EARS: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation*.
Proceedings of *ISCA Interspeech*, 2024, pp. 4873–4877.
| rettertop/blockassist-bc-mimic_peckish_cockroach_1755674177 | rettertop | 2025-08-20T07:16:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mimic peckish cockroach", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T07:16:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mimic peckish cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| rettertop/blockassist-bc-rangy_mighty_hare_1755674136 | rettertop | 2025-08-20T07:15:43Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rangy mighty hare", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T07:15:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rangy mighty hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
| ElToro2602/blockassist-bc-raging_prehistoric_chameleon_1755674030 | ElToro2602 | 2025-08-20T07:14:27Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging prehistoric chameleon", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T07:14:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging prehistoric chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|