| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-02 06:30:45 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (533 classes) | | |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-02 06:30:39 |
| card | string | length 11 | length 1.01M |

Each record below is listed as `modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |`, followed by the card text and a terminating `|` line.
aleebaster/blockassist-bc-sly_eager_boar_1756719763 | aleebaster | 2025-09-01T10:35:57Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:35:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756721295 | GroomerG | 2025-09-01T10:34:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "vicious pawing badger", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:34:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pepijn223/rlearn_siglip8 | pepijn223 | 2025-09-01T10:33:51Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "rlearn", "robotics", "dataset:pepijn223/phone_pipeline_pickup1", "license:apache-2.0", "region:us"] | robotics | 2025-09-01T10:33:42Z |
---
datasets: pepijn223/phone_pipeline_pickup1
library_name: lerobot
license: apache-2.0
model_name: rlearn
pipeline_tag: robotics
tags:
- lerobot
- rlearn
- robotics
---
# Model Card for rlearn
<!-- Provide a quick summary of what the model is/does. -->
_Model type not recognized — please update this template._
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756721222 | kojeklollipop | 2025-09-01T10:33:29Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:33:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
noman007/FastVLM05B | noman007 | 2025-09-01T10:27:30Z | 0 | 0 | ml-fastvlm | ["ml-fastvlm", "safetensors", "llava_qwen2", "text-generation", "transformers", "conversational", "custom_code", "arxiv:2412.13303", "license:apple-amlr", "region:us"] | text-generation | 2025-09-01T10:25:48Z |
---
license: apple-amlr
license_name: apple-ascl
license_link: https://github.com/apple/ml-fastvlm/blob/main/LICENSE_MODEL
library_name: ml-fastvlm
tags:
- transformers
---
# FastVLM: Efficient Vision Encoding for Vision Language Models
FastVLM was introduced in
**[FastVLM: Efficient Vision Encoding for Vision Language Models](https://www.arxiv.org/abs/2412.13303)** (CVPR 2025).
<p align="center">
<img src="acc_vs_latency_qwen-2.png" alt="Accuracy vs latency figure." width="400"/>
</p>
### Highlights
* We introduce FastViTHD, a novel hybrid vision encoder designed to output fewer tokens and significantly reduce encoding time for high-resolution images.
* Our smallest variant outperforms LLaVA-OneVision-0.5B with 85x faster Time-to-First-Token (TTFT) and 3.4x smaller vision encoder.
* Our larger variants using Qwen2-7B LLM outperform recent works like Cambrian-1-8B while using a single image encoder with a 7.9x faster TTFT.
### Evaluations
| Benchmark | FastVLM-0.5B | FastVLM-1.5B | FastVLM-7B |
|:--------------|:------------:|:------------:|:----------:|
| Ai2D | 68.0 | 77.4 | 83.6 |
| ScienceQA | 85.2 | 94.4 | 96.7 |
| MMMU | 33.9 | 37.8 | 45.4 |
| VQAv2 | 76.3 | 79.1 | 80.8 |
| ChartQA | 76.0 | 80.1 | 85.0 |
| TextVQA | 64.5 | 70.4 | 74.9 |
| InfoVQA | 46.4 | 59.7 | 75.8 |
| DocVQA | 82.5 | 88.3 | 93.2 |
| OCRBench | 63.9 | 70.2 | 73.1 |
| RealWorldQA | 56.1 | 61.2 | 67.2 |
| SeedBench-Img | 71.0 | 74.2 | 75.4 |
### Usage Example
To run inference with the PyTorch checkpoint, follow the instructions in the official repo.
Download the model:
```bash
huggingface-cli download apple/FastVLM-0.5B
```
Run inference using `predict.py` from the official repo.
```bash
python predict.py --model-path /path/to/checkpoint-dir \
--image-file /path/to/image.png \
--prompt "Describe the image."
```
### Run inference with Transformers (Remote Code)
To run inference with Transformers, we can leverage `trust_remote_code` along with the following snippet:
```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
MID = "apple/FastVLM-0.5B"
IMAGE_TOKEN_INDEX = -200 # what the model code looks for
# Load
tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
MID,
torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
device_map="auto",
trust_remote_code=True,
)
# Build chat -> render to string (not tokens) so we can place <image> exactly
messages = [
{"role": "user", "content": "<image>\nDescribe this image in detail."}
]
rendered = tok.apply_chat_template(
messages, add_generation_prompt=True, tokenize=False
)
pre, post = rendered.split("<image>", 1)
# Tokenize the text *around* the image token (no extra specials!)
pre_ids = tok(pre, return_tensors="pt", add_special_tokens=False).input_ids
post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids
# Splice in the IMAGE token id (-200) at the placeholder position
img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
attention_mask = torch.ones_like(input_ids, device=model.device)
# Preprocess image via the model's own processor
img = Image.open("test-2.jpg").convert("RGB")
px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
px = px.to(model.device, dtype=model.dtype)
# Generate
with torch.no_grad():
out = model.generate(
inputs=input_ids,
attention_mask=attention_mask,
images=px,
max_new_tokens=128,
)
print(tok.decode(out[0], skip_special_tokens=True))
```
## Citation
If you found this model useful, please cite the following paper:
```bibtex
@InProceedings{fastvlm2025,
  author = {Pavan Kumar Anasosalu Vasu and Fartash Faghri and Chun-Liang Li and Cem Koc and Nate True and Albert Antony and Gokul Santhanam and James Gabriel and Peter Grasch and Oncel Tuzel and Hadi Pouransari},
title = {FastVLM: Efficient Vision Encoding for Vision Language Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2025},
}
```
|
arif696/blockassist-bc-regal_spotted_pelican_1756722039 | arif696 | 2025-09-01T10:22:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:22:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mainwalletbd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pudgy_jagged_ape | mainwalletbd | 2025-09-01T10:20:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am pudgy_jagged_ape", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-01T07:34:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pudgy_jagged_ape
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756721932 | matherchodhuuu | 2025-09-01T10:20:18Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted skilled chameleon", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:20:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
georgeNakayama/AIpparel | georgeNakayama | 2025-09-01T10:18:44Z | 0 | 5 | null | ["en", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:finetune:llava-hf/llava-1.5-7b-hf", "license:mit", "region:us"] | null | 2025-03-21T13:17:06Z |
---
license: mit
language:
- en
base_model:
- llava-hf/llava-1.5-7b-hf
---
Welcome to the official storage site for <strong>AIpparel: A Multimodal Foundation Model for Digital Garments (CVPR 2025 Highlight)</strong>.
This repository stores the data and the model weights of AIpparel.
### For the project website, visit [here](https://georgenakayama.github.io/AIpparel/).
### For the official code repository, visit [here](https://github.com/georgeNakayama/AIpparel-Code).
## GarmentCodeData-Multimodal (GCD-MM) Dataset.
Download <strong>gcd_mm_captions.zip</strong> and <strong>gcd_mm_editing.zip</strong> to obtain the GCD-MM annotations.
We also provide a compact version in <strong>gcd-mm.parquet</strong>.
Specifically, the file contains the following columns representing different annotations for GCD (see the loading sketch after this list):
- <strong>idx:</strong> unique id of the sewing pattern, used to associate the annotations with sewing patterns in GCD.
- <strong>descriptive_text:</strong> a caption that describes the sewing pattern in detail.
- <strong>speculative_text:</strong> a caption that speculates about a situation suitable for the sewing pattern.
- <strong>keywords:</strong> short keywords describing the design of the sewing pattern.
- <strong>title:</strong> title of the sewing pattern.
- <strong>original_pattern:</strong> the sewing pattern JSON file.
- <strong>edited_pattern:</strong> the edited sewing pattern JSON file.
- <strong>edit_instruction:</strong> the editing instruction that converts the original pattern to the edited pattern.
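For a quick look at the annotations, the parquet file can be loaded with pandas; a minimal sketch (pandas is an assumption here, any parquet reader works):
```python
import pandas as pd

# Load the compact GCD-MM annotation file described above.
df = pd.read_parquet("gcd-mm.parquet")
print(df.columns.tolist())

# Inspect one annotated sewing pattern and its editing triplet.
row = df.iloc[0]
print(row["title"], row["keywords"])
print(row["descriptive_text"])
print(row["edit_instruction"])
```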
## AIpparel Pre-trained Model Weights
Download <strong>aipparel_pretrained.pth</strong> to obtain the pretrained weights for AIpparel. Note that this checkpoint <strong>does not</strong> contain weights for the vision encoder and mm-projector of LLaVA 1.5. Please download them separately.
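A minimal sketch of inspecting and restoring the checkpoint with PyTorch (the `model` object in the final comment is hypothetical and comes from the official code):
```python
import torch

# Inspect the released checkpoint; weights for the LLaVA 1.5 vision encoder
# and mm-projector are intentionally absent and must be fetched separately.
state_dict = torch.load("aipparel_pretrained.pth", map_location="cpu")
print(len(state_dict), "tensors")
print(sorted(state_dict.keys())[:10])

# When restoring into the model built from the official code (hypothetical
# `model` object), strict=False skips the keys this checkpoint omits:
# model.load_state_dict(state_dict, strict=False)
```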
|
kevinshin/qwen3-1.7b-base-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k | kevinshin | 2025-09-01T10:17:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "trl", "sft", "alignment-handbook", "conversational", "dataset:kevinshin/wildchat-creative-writing-3k-rft", "base_model:Qwen/Qwen3-1.7B-Base", "base_model:finetune:Qwen/Qwen3-1.7B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-01T08:01:57Z |
---
base_model: Qwen/Qwen3-1.7B-Base
datasets: kevinshin/wildchat-creative-writing-3k-rft
library_name: transformers
model_name: qwen3-1.7b-base-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k
tags:
- generated_from_trainer
- trl
- sft
- alignment-handbook
licence: license
---
# Model Card for qwen3-1.7b-base-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base) on the [kevinshin/wildchat-creative-writing-3k-rft](https://huggingface.co/datasets/kevinshin/wildchat-creative-writing-3k-rft) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kevinshin/qwen3-1.7b-base-rft-lr-1e-5-batch-16-epoch-1-wildchat-cw-3k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/myungjune-sogang-university/general_remo_train/runs/mig8f8td)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.55.0.dev0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gopterwegop/blockassist-bc-downy_tawny_hippo_1756721793 | gopterwegop | 2025-09-01T10:17:05Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "downy tawny hippo", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:16:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy tawny hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-roaring_flapping_python_1756721706 | popouy | 2025-09-01T10:15:22Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring flapping python", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:15:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring flapping python
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cookienter/lifechart-roberta-classifier-hptuning | cookienter | 2025-09-01T10:15:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-01T08:57:33Z |
---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: lifechart-roberta-classifier-hptuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lifechart-roberta-classifier-hptuning
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0396
- Macro F1: 0.7860
- Precision: 0.7798
- Recall: 0.8011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4008944900977562e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.11964916087795532
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 1.789 | 1.0 | 1641 | 0.8760 | 0.7307 | 0.6981 | 0.7851 |
| 0.7511 | 2.0 | 3282 | 0.8702 | 0.7615 | 0.7434 | 0.7883 |
| 0.4898 | 3.0 | 4923 | 0.9587 | 0.7794 | 0.7711 | 0.8008 |
| 0.3141 | 4.0 | 6564 | 1.0396 | 0.7860 | 0.7798 | 0.8011 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
arif696/blockassist-bc-regal_spotted_pelican_1756721525 | arif696 | 2025-09-01T10:13:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal spotted pelican", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:13:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal spotted pelican
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gopterwegop/blockassist-bc-hulking_singing_dolphin_1756721532 | gopterwegop | 2025-09-01T10:12:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking singing dolphin", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:12:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking singing dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756721401 | omerbektass | 2025-09-01T10:10:59Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:10:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1756721241 | matherchodhuuu | 2025-09-01T10:08:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted skilled chameleon", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:08:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lightfooted skilled chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756721044 | faisu-eth | 2025-09-01T10:04:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:04:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick twitchy jackal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF | mradermacher | 2025-09-01T10:03:15Z | 0 | 0 | transformers | ["transformers", "gguf", "generated_from_trainer", "trl", "kto", "en", "base_model:AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO", "base_model:quantized:AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-01T05:56:43Z |
---
base_model: AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO
language:
- en
library_name: transformers
model_name: Llama-3.1-8B-sft-spin-10k-KTO
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- kto
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-spin-10k-KTO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-3.1-8B-sft-spin-10k-KTO-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
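As one concrete illustration, a downloaded quant can be run through the llama-cpp-python bindings; a minimal sketch (installation and the local file path are assumptions):
```python
# Assumption: llama-cpp-python is installed and the Q4_K_M file was downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3.1-8B-sft-spin-10k-KTO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```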
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-sft-spin-10k-KTO-GGUF/resolve/main/Llama-3.1-8B-sft-spin-10k-KTO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756720852 | omerbkts | 2025-09-01T10:01:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T10:01:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
teysty/vjepa2-vitl-fpc16-256-ssv2-fdet_64-frames-1clip-1indice_newsplit | teysty | 2025-09-01T10:00:49Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vjepa2", "video-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | video-classification | 2025-09-01T09:59:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1756719232 | vwzyrraz7l | 2025-09-01T09:59:38Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:59:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756720585 | Ferdi3425 | 2025-09-01T09:57:44Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:57:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kznmp3/blockassist-bc-lively_raging_hippo_1756720531 | kznmp3 | 2025-09-01T09:57:04Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lively raging hippo", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively raging hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756720573 | sekirr | 2025-09-01T09:56:53Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:56:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-yawning_scavenging_gull_1756720442 | popouy | 2025-09-01T09:54:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning scavenging gull", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:54:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning scavenging gull
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-6t_diff_pv_sycophant | coastalcph | 2025-09-01T09:50:26Z | 0 | 0 | null | ["safetensors", "qwen2", "region:us"] | null | 2025-09-01T09:49:33Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")
t_combined = 1.0 * t_1 + 6.0 * t_2 - 6.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
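The `TaskVector` helper used above is not included in this card; below is a minimal sketch of the arithmetic it implies (class name and method signatures mirror the snippet, but the actual implementation in the creation script may differ):
```python
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    """Sketch: a task vector is the element-wise difference between a
    fine-tuned model's weights and its pretrained base."""

    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        ft = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        self.vector = {k: ft[k] - base[k] for k in ft if k in base}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return TaskVector(vector={k: v - other.vector[k] for k, v in self.vector.items()})

    def __rmul__(self, coef):
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(base_id)
        sd = model.state_dict()
        model.load_state_dict(
            {k: sd[k] + scaling_coef * self.vector[k] if k in self.vector else sd[k]
             for k in sd}
        )
        return model
```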
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:

```json
{
  "pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
  "finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
  "finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
  "finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
  "output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-6t_diff_pv_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "combine_diff_projecting_out": false,
  "scale_t1": 1.0,
  "scale_t2": 6.0,
  "scale_t3": 6.0
}
```
|
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1756718331 | capungmerah627 | 2025-09-01T09:44:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:44:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging soaring porcupine
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kavpro/blockassist-bc-tall_lively_caribou_1756719775 | kavpro | 2025-09-01T09:43:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall lively caribou", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:43:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall lively caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NahedDom/blockassist-bc-flapping_stocky_leopard_1756717619 | NahedDom | 2025-09-01T09:43:08Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:43:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flapping stocky leopard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
popouy/blockassist-bc-bold_stubby_rhino_1756719740 | popouy | 2025-09-01T09:42:56Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bold stubby rhino", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:42:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold stubby rhino
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_412 | AnonymousCS | 2025-09-01T09:42:07Z | 8 | 0 | transformers | ["transformers", "safetensors", "rembert", "text-classification", "generated_from_trainer", "base_model:google/rembert", "base_model:finetune:google/rembert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-01T00:04:07Z |
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_412
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_412
This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8313
- Accuracy: 0.9581
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5
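For completeness, the checkpoint can be exercised with a standard text-classification pipeline; a minimal sketch (the example sentence is illustrative, and label semantics come from the model config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/populism_classifier_412")
print(clf("The corrupt elite has betrayed the will of ordinary people."))
```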
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.7117 | 1.0 | 96 | 0.6134 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0535 | 2.0 | 192 | 0.8084 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.9136 | 3.0 | 288 | 0.8313 | 0.9581 | 0.0 | 0.0 | 0.0 | 0.5 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
popouy/blockassist-bc-muscular_grassy_elephant_1756719650 | popouy | 2025-09-01T09:41:21Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular grassy elephant", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:40:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular grassy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
godnpeter/pick_pikachu | godnpeter | 2025-09-01T09:41:14Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "smolvla", "robotics", "dataset:godnpeter/pick_pikachu", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us"] | robotics | 2025-09-01T09:41:05Z |
---
base_model: lerobot/smolvla_base
datasets: godnpeter/pick_pikachu
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- smolvla
- robotics
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
faisu-eth/blockassist-bc-thick_twitchy_jackal_1756719619 | faisu-eth | 2025-09-01T09:41:04Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick twitchy jackal", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:40:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick twitchy jackal
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RikiyaT/mxbai-ettin-17m-msmarco-angle-ft | RikiyaT | 2025-09-01T09:40:58Z | 11 | 0 | null | ["safetensors", "modernbert", "license:mit", "region:us"] | null | 2025-08-31T11:29:18Z |
---
license: mit
---
# RikiyaT/mxbai-ettin-17m-msmarco-angle-ft
Ettin + AnglE fine-tuned embedding model.
- **Base Model**: `RikiyaT/mxbai-ettin-17m-pretrained`
- **Pooling Strategy**: `mean` (avg)
- **Training Method**: AnglE loss (ibn/cln + angle=0.02) on a B-format dataset (text, positive, negative).
- **Data Prompts**: `search_query:` / `search_document:` were used during training data creation.
## Usage
### With SentenceTransformers (recommended)
A ready-to-use SentenceTransformers variant is available at **[RikiyaT/mxbai-ettin-17m-msmarco-angle-ft-st](https://huggingface.co/RikiyaT/mxbai-ettin-17m-msmarco-angle-ft-st)**.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('RikiyaT/mxbai-ettin-17m-msmarco-angle-ft-st')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
print(embeddings.shape)
```
### With Transformers (this repository)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("RikiyaT/mxbai-ettin-17m-msmarco-angle-ft", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("RikiyaT/mxbai-ettin-17m-msmarco-angle-ft", trust_remote_code=True)
```
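Since training used `mean` pooling and the `search_query:` / `search_document:` prompts, encoding through raw Transformers requires manual pooling; a minimal sketch building on the `model` and `tokenizer` above (the `encode` helper is illustrative):
```python
import torch

def encode(texts, prompt="search_document: "):
    inputs = tokenizer([prompt + t for t in texts], padding=True,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean over valid tokens

print(encode(["This is an example sentence"]).shape)
```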
|
sarabarahona/voxblink2-ResNet152-8k | sarabarahona | 2025-09-01T09:39:08Z | 0 | 1 | null | ["speaker embeddings", "wespeaker", "voxblink2", "region:us"] | null | 2025-05-31T00:05:47Z |
---
tags:
- speaker embeddings
- wespeaker
- voxblink2
---
This is a ResNet-152 speaker recognition model trained on the VoxBlink2 dataset, which contains 111,284 speakers.
The model is specifically adapted for telephone speech: the original data was downsampled to 8 kHz, and the GSM codec was applied to 50% of the data to simulate low-bandwidth conditions.
The backbone was trained using the [WeSpeaker toolkit](https://github.com/wenet-e2e/wespeaker), following their standard VoxCeleb recipe.
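Embeddings can be extracted through WeSpeaker's Python bindings; a minimal sketch, assuming this repo's files are downloaded locally (the directory and wav paths are illustrative):
```python
import wespeaker

# Assumption: the repo files are downloaded to ./voxblink2-ResNet152-8k
model = wespeaker.load_model_local("./voxblink2-ResNet152-8k")
emb = model.extract_embedding("utterance_8k.wav")  # telephone-style 8 kHz audio
score = model.compute_similarity("enroll.wav", "test.wav")
print(emb.shape, score)
```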
# **Results on SRE-24**
| | EER(%) | min Cprimary|
|--------------|:------:|:-----------:|
| Development | 9.31 | 0.522 |
| Evaluation | 7.59 | 0.562 |
# **Results on VoxCeleb1**
| | EER(%) |
|--------------|:------:|
| VoxCeleb1-O | 2.42 |
| VoxCeleb1-E | 2.15 |
| VoxCeleb1-H | 4.32 |
# **Citation**
If you use this model in your research, please cite the following paper:
```bibtex
@inproceedings{barahona25_interspeech,
title = {{Analysis of ABC Frontend Audio Systems for the NIST-SRE24}},
author = {Sara Barahona and Anna Silnova and Ladislav Mošner and Junyi Peng and Oldřich Plchot and Johan Rohdin and Lin Zhang and Jiangyu Han and Petr Palka and Federico Landini and Lukáš Burget and Themos Stafylakis and Sandro Cumani and Dominik Boboš and Miroslav Hlavaček and Martin Kodovsky and Tomaš Pavliček},
year = {2025},
booktitle = {{Interspeech 2025}},
pages = {5763--5767},
doi = {10.21437/Interspeech.2025-2737},
issn = {2958-1796},
}
```
|
lusxvr/nanoVLM-460M | lusxvr | 2025-09-01T09:38:48Z | 0 | 0 | nanovlm | ["nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us"] | image-text-to-text | 2025-09-01T09:38:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("lusxvr/nanoVLM-460M")
```
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756717961 | Loder-S | 2025-09-01T09:38:13Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sprightly knobby tiger", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:38:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756719463 | omerbkts | 2025-09-01T09:38:07Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "keen fast giraffe", "arxiv:2504.07091", "region:us"] | null | 2025-09-01T09:38:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/populism_classifier_411 | AnonymousCS | 2025-09-01T09:36:27Z | 3 | 0 | transformers | ["transformers", "safetensors", "rembert", "text-classification", "generated_from_trainer", "base_model:google/rembert", "base_model:finetune:google/rembert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-31T23:53:47Z |
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_411
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_411
This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6686
- Accuracy: 0.9118
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.6746 | 1.0 | 91 | 0.6774 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6333 | 2.0 | 182 | 0.6769 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.7695 | 3.0 | 273 | 0.6672 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6843 | 4.0 | 364 | 0.6757 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.7178 | 5.0 | 455 | 0.6668 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6505 | 6.0 | 546 | 0.6666 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5584 | 7.0 | 637 | 0.6691 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.8056 | 8.0 | 728 | 0.6686 | 0.9118 | 0.0 | 0.0 | 0.0 | 0.5 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
mizutoukotori/act_so101_v3 | mizutoukotori | 2025-09-01T09:34:56Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "act", "dataset:mizutoukotori/pick_up_the_yellow_block_v2", "arxiv:2304.13705", "license:apache-2.0", "region:us"] | robotics | 2025-09-01T09:34:21Z |
---
datasets: mizutoukotori/pick_up_the_yellow_block_v2
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- lerobot
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
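To load the trained policy directly in Python, something like the following should work (a sketch only; the exact import path varies across lerobot releases):
```python
# Sketch, assuming a recent lerobot release; older versions expose the policy
# under lerobot.common.policies.act.modeling_act instead.
from lerobot.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("mizutoukotori/act_so101_v3")
policy.eval()  # the policy is a torch.nn.Module
```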
|
Satram/QYA_900_Ej
|
Satram
| 2025-09-01T09:34:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T09:34:22Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sekirr/blockassist-bc-masked_tenacious_whale_1756719141
|
sekirr
| 2025-09-01T09:33:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:32:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756718609
|
omerbkts
| 2025-09-01T09:23:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:23:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tencent/Hunyuan-7B-Instruct-FP8
|
tencent
| 2025-09-01T09:23:35Z | 140 | 5 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Pretrain",
"base_model:quantized:tencent/Hunyuan-4B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-07-30T09:22:42Z |
---
base_model:
- tencent/Hunyuan-4B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks were evaluated with the TRT-LLM backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install transformers (Hunyuan support will be merged into the main branch later).
```SHELL
pip install "transformers>=4.56.0"
```
Our model defaults to slow-thinking reasoning; there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

model_name_or_path = "tencent/Hunyuan-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # you may want to use bfloat16 and/or move to GPU here

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # toggle thinking mode (default: True)
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=", output_text)

# Parse the reasoning process and the final answer out of the generation.
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)

answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)

think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
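For example, these parameters can be passed straight to `model.generate` (a minimal sketch reusing `model`, `tokenizer`, and `tokenized_chat` from the snippet above):
```python
# Minimal sketch: apply the recommended sampling parameters with model.generate.
generation_kwargs = dict(
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
    max_new_tokens=2048,
)
outputs = model.generate(tokenized_chat.to(model.device), **generation_kwargs)
print(tokenizer.decode(outputs[0]))
```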
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
This section describes how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
* Single-node training
  Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
* Multi-node training
  Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantized models. `AngelSlim` is a toolset dedicated to creating a more user-friendly, comprehensive, and efficient model compression solution.
### FP8 Quantization
We use FP8 static quantization: an 8-bit floating-point format whose quantization scales are pre-determined from a small amount of calibration data (no training required). Model weights and activation values are converted to FP8, improving inference efficiency and lowering the deployment threshold. You can quantize the model yourself with AngelSlim, or directly download and use our pre-quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer through an optimization process that approximates the inverse Hessian. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ statistically analyzes activation magnitudes on a small amount of calibration data (no training required). For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
You can quantize the model yourself with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download and use our pre-quantized open-source models [LINK](https://huggingface.co/).
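For intuition, here is a deliberately simplified sketch of W4A16-style per-channel weight quantization (plain round-to-nearest; real GPTQ adds Hessian-based error compensation and AWQ adds activation-aware channel scaling on top of this idea):
```python
import torch

def quantize_w4_per_channel(weight: torch.Tensor):
    """Round-to-nearest W4A16 weight quantization, one scale per output channel.

    Illustration only: GPTQ additionally compensates rounding error with an
    approximate inverse Hessian, and AWQ rescales salient channels based on
    activation statistics before quantizing.
    """
    qmin, qmax = -8, 7  # signed 4-bit integer range
    scale = weight.abs().amax(dim=1, keepdim=True) / qmax  # per-channel scale
    q = torch.clamp(torch.round(weight / scale), qmin, qmax)
    w_hat = q * scale  # dequantized weights effectively used at inference (A16)
    return q.to(torch.int8), scale, w_hat

w = torch.randn(1024, 1024)
q, scale, w_hat = quantize_w4_per_channel(w)
print("mean abs quantization error:", (w - w_hat).abs().mean().item())
```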
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker image tags: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started, pull the image (tags list: https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags):
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare the configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model file:
  - Hugging Face: downloaded automatically by vLLM.
  - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded from Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded from ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script runs successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
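The same request can be issued with the `openai` Python client against this OpenAI-compatible endpoint (a sketch; vLLM-specific fields such as `top_k`, `repetition_penalty`, and `stop_token_ids` go through `extra_body`):
```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")  # vLLM ignores the key

response = client.chat.completions.create(
    model="hunyuan",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Sort the four oceans by area and name the smallest one."},
    ],
    max_tokens=2048,
    temperature=0.7,
    top_p=0.6,
    extra_body={"top_k": 20, "repetition_penalty": 1.05, "stop_token_ids": [127960]},
)
print(response.choices[0].message.content)
```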
#### Quantized model deployment
This section describes the process of deploying a quantized model using vLLM.
The server defaults to BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables as shown above.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model (using the GPTQ method) only requires setting the environment variables:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables accordingly.
Next we start the FP8 service. Run
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, you are welcome to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).
|
AnonymousCS/populism_classifier_409
|
AnonymousCS
| 2025-09-01T09:22:31Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"rembert",
"text-classification",
"generated_from_trainer",
"base_model:google/rembert",
"base_model:finetune:google/rembert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-31T23:40:03Z |
---
library_name: transformers
license: apache-2.0
base_model: google/rembert
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_classifier_409
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_classifier_409
This model is a fine-tuned version of [google/rembert](https://huggingface.co/google/rembert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.9647
- 1-f1: 0.0
- 1-recall: 0.0
- 1-precision: 0.0
- Balanced Acc: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:--------:|:-----------:|:------------:|
| 0.9293 | 1.0 | 135 | 0.5705 | 0.9647 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.0241 | 2.0 | 270 | 0.7964 | 0.9647 | 0.0 | 0.0 | 0.0 | 0.5 |
| 1.0239 | 3.0 | 405 | 0.7528 | 0.9647 | 0.0 | 0.0 | 0.0 | 0.5 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.4.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
aaa961/finetuned-bge-m3-base-en
|
aaa961
| 2025-09-01T09:22:05Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:336",
"loss:BatchSemiHardTripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-01T09:21:08Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:336
- loss:BatchSemiHardTripletLoss
base_model: BAAI/bge-m3
widget:
- source_sentence: "Semicolons in Emmet abbreviations inside SASS files <!-- ⚠️⚠️\
\ Do Not Delete This! bug_report_template ⚠️⚠️ -->\r\n<!-- Please read our Rules\
\ of Conduct: https://opensource.microsoft.com/codeofconduct/ -->\r\n<!-- Please\
\ search existing issues to avoid creating duplicates. -->\r\n<!-- Also please\
\ test using the latest insiders build to make sure your issue has not already\
\ been fixed: https://code.visualstudio.com/insiders/ -->\r\n\r\n<!-- Use Help\
\ > Report Issue to prefill these. -->\r\n- VSCode Version: 1.53.0\r\n- OS Version:\
\ Windows 10 Pro 20H2\r\n\r\nSteps to Reproduce:\r\n\r\n1. Open SASS file.\r\n\
2. Expand any Emmet abbreviation.\r\n3. The expanded line ends with a semicolon,\
\ which is unacceptable for SASS.\r\n4. The issue appeared after 1.53.0 update.\r\
\n\r\n<!-- Launch with `code --disable-extensions` to check. -->\r\nDoes this\
\ issue occur when all extensions are disabled?: Yes/No\r\n"
sentences:
- "Breakpoint decorations go to wrong place - Open menuService.ts\r\n- Set some\
\ breakpoints\r\n- They show in the wrong place and cover a character, but seemingly\
\ get fixed when setting a breakpoint on the next line\r\n\r\n\r\
\n"
- "Breadcrumbs/selectbox dropdown does not relayout after resizing panel <!-- ⚠️⚠️\
\ Do Not Delete This! bug_report_template ⚠️⚠️ -->\r\n<!-- Please read our Rules\
\ of Conduct: https://opensource.microsoft.com/codeofconduct/ -->\r\n<!-- Please\
\ search existing issues to avoid creating duplicates. -->\r\n<!-- Also please\
\ test using the latest insiders build to make sure your issue has not already\
\ been fixed: https://code.visualstudio.com/insiders/ -->\r\n\r\n<!-- Use Help\
\ > Report Issue to prefill these. -->\r\n- VSCode Version: stable and insiders\r\
\n\r\nSteps to Reproduce:\r\n\r\n1. Click on a breadcrumb/selectbox so the dropdown\
\ shows up\r\n2. Resize panel\r\n3. :bug: Observe the dropdown does not relayout\
\ \r\n\r\n\r\
\n\r\
\n\r\n"
- "Terminal Sticky Scroll disappears when the beginning (not the end) of code block\
\ reaches the end of the scope Testing #199240\r\n\r\nSo I was testing the terminal\
\ sticky scroll and I noticed that the sticky code block disappears in an abrupt\
\ manner when the beginning (not the end) of the multiline terminal command, reaches\
\ the end of the scope. To give more context about what I mean, I added the GIF\
\ below. I was sort of expecting the code block to start disappearing when the\
\ line `> ` reached the end of the output for that command. I noticed that the\
\ code block disappeared however when the line `echo hi; \\` reached the end of\
\ the output for that run.\r\n\r\nhttps://github.com/microsoft/vscode/assets/61460952/8c92b925-f29a-498a-a8f0-cd438e6d6c6c\r\
\n\r\nI don't think this is a problem per say, it's a different manner of animating\
\ sticky scroll. I was just wondering if this is intended? I think the potential\
\ issue with this approach is that the sticky scroll block is hiding the next\
\ terminal command and its output while you are scrolling until it disappears.\
\ In the editor, sticky scroll starts disappearing progressively when the bottom\
\ of the last block/line touches the end of the scope. Perhaps we could explore\
\ something similar?\r\n\r\nhttps://github.com/microsoft/vscode/assets/61460952/e3c8fdf2-7dc1-4363-a9f7-ff403a36f533\r\
\n\r\n\r\n"
- source_sentence: "announce what is in the input box Type: <b>Feature Request</b>\r\
\n\r\nWhen using the command palette with control + shift + p or searching for\
\ files with control + p, it is not possible to spell with right or left arrows\
\ what is being typed or what has been pasted.\r\n\r\nThis would make it easier\
\ to correct what was typed in these areas or change parts after pasting something.\r\
\n\r\nI emphasize that the adjustment to be able to spell should not change the\
\ behavior of using up or down arrows to navigate suggestions as it currently\
\ works.\r\n\r\nThank you very much in advance.\r\nVS Code version: Code - Insiders\
\ 1.71.0-insider (235a92a3693176fbf9af16ec5b829c458a1d682f, 2022-08-22T05:16:19.478Z)\r\
\nOS version: Windows_NT x64 10.0.19044\r\nModes:\r\nSandboxed: Yes\r\nRemote\
\ OS version: Linux x64 5.10.16.3-microsoft-standard-WSL2\r\n\r\n\r\n<!-- generated\
\ by issue reporter -->"
sentences:
- "Cleared terminal content comes back after switching between sessions Type: <b>Bug</b>\r\
\n\r\n1. Print something in integrated terminal\r\n2. Right click, Clear\r\n3.\
\ Create a new terminal session\r\n4. Switch back to previous terminal session\r\
\n\r\n\U0001F41B: Cleared terminal content comes back\r\n\r\nhttps://user-images.githubusercontent.com/1330321/208982497-007cb17e-a2e8-490f-9753-569e60758522.mp4\r\
\n\r\n\r\nVS Code version: Code - Insiders 1.75.0-insider (c437ec2497ccd6b5ff5b2f6da86a8042e1ed951a,\
\ 2022-12-21T05:41:44.328Z)\r\nOS version: Windows_NT x64 10.0.25267\r\nModes:\r\
\nSandboxed: Yes\r\n\r\n<details>\r\n<summary>System Info</summary>\r\n\r\n|Item|Value|\r\
\n|---|---|\r\n|CPUs|AMD Ryzen 5 5600X 6-Core Processor (12 x 3700)|\r\
\n|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor:\
\ disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl:\
\ enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer:\
\ enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl:\
\ enabled<br>webgl2: enabled<br>webgpu: disabled_off|\r\n|Load (avg)|undefined|\r\
\n|Memory (System)|31.91GB (14.12GB free)|\r\n|Process Argv||\r\n|Screen Reader|no|\r\
\n|VM|0%|\r\n</details><details><summary>Extensions (26)</summary>\r\n\r\nExtension|Author\
\ (truncated)|Version\r\n---|---|---\r\nBookmarks|ale|13.3.1\r\ngitlens|eam|13.2.0\r\
\nprettier-vscode|esb|9.10.3\r\ncopilot-nightly|Git|1.65.7751\r\nremotehub|Git|0.49.2022121901\r\
\nvscode-env|Iro|0.1.0\r\nsearch-node-modules|jas|1.3.0\r\nfluent-icons|mig|0.0.18\r\
\nremote-containers|ms-|0.268.0\r\nremote-ssh|ms-|0.95.2022121915\r\nremote-ssh-edit|ms-|0.84.0\r\
\nremote-wsl|ms-|0.72.0\r\nhexeditor|ms-|1.9.9\r\njs-debug-nightly|ms-|2022.12.1417\r\
\nremote-explorer|ms-|0.1.2022121209\r\nremote-repositories|ms-|0.26.0\r\nvscode-typescript-next|ms-|5.0.202212200\r\
\nabracadabra|nic|6.17.0\r\nmaterial-icon-theme|PKi|4.23.0\r\nprisma|Pri|4.8.0\r\
\nvscode-yaml|red|1.10.1\r\nreveal|smu|1.2.7\r\nworkspace-config-plus|Swe|0.2.2\r\
\nluna-paint|Tyr|0.16.0\r\nmarkdown-all-in-one|yzh|3.5.0\r\nmaterial-theme|zhu|3.15.6\r\
\n\r\n\r\n</details>\r\n<!-- generated by issue reporter -->"
- "explorer display folder is symbolic incorrectly \r\nhttps://user-images.githubusercontent.com/41773861/228953537-db765845-7f47-41e1-978d-56eda8199f0e.mov\r\
\n\r\nreproduce steps:\r\n\r\n1. git clone git@github.com:tjx666/module-resolution.git\r\
\n2. git reset --hard 126808cb72b1cfa5d1f0d0c6a3f25ed69491f536\r\n3. pnpm install\r\
\n4. open apps/esm-app/node_modules, you can see vscode show the folder `commonjs`\
\ is a symbolic link\r\n5. delete folder apps/esm-app/node_modules/commonjs\r\n\
6. copy packages/commonjs to folder apps/esm-app/node_modules/commonjs\r\n7. the\
\ vscode still show folder apps/esm-app/node_modules/commonjs is a symbolic link,\
\ but actually not\r\n\r\n<img width=\"920\" alt=\"image\" src=\"https://user-images.githubusercontent.com/41773861/228955043-c91f59bf-7f12-4d25-8758-732e3fd5b542.png\"\
>\r\n"
- "Command \"Terminal: Clear\" and Ctrl+Left make cursur jump to bottom <!-- ⚠️⚠️\
\ Do Not Delete This! bug_report_template ⚠️⚠️ -->\r\n<!-- Please read our Rules\
\ of Conduct: https://opensource.microsoft.com/codeofconduct/ -->\r\n<!-- \U0001F56E\
\ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions\
\ -->\r\n<!-- \U0001F50E Search existing issues to avoid creating duplicates.\
\ -->\r\n<!-- \U0001F9EA Test using the latest Insiders build to see if your issue\
\ has already been fixed: https://code.visualstudio.com/insiders/ -->\r\n<!--\
\ \U0001F4A1 Instead of creating your report here, use 'Report Issue' from the\
\ 'Help' menu in VS Code to pre-fill useful information. -->\r\n<!-- \U0001F527\
\ Launch with `code --disable-extensions` to check. -->\r\nDoes this issue occur\
\ when all extensions are disabled?: Yes\r\n\r\n<!-- \U0001FA93 If you answered\
\ No above, use 'Help: Start Extension Bisect' from Command Palette to try to\
\ identify the cause. -->\r\n<!-- \U0001F4E3 Issues caused by an extension need\
\ to be reported directly to the extension publisher. The 'Help > Report Issue'\
\ dialog can assist with this. -->\r\n- VS Code Version: 1.74.0-insider\r\n- OS\
\ Version: Windows_NT x64 10.0.19044\r\n\r\nSteps to Reproduce:\r\n\r\n1. Click\
\ menu item \"Terminal > New Terminal\".\r\n2. Run `echo foo`.\r\n3. Hold Enter\
\ to fill the terminal view.\r\n4. Run command `Terminal: Clear` from the command\
\ palette.\r\n5. Press up-arrow key to fill the current line with the command\
\ `echo foo` again. Don't run it.\r\n6. Press Ctrl+left-arrow key. The cursor\
\ jumps to the last line in the terminal view, which is empty, even though we\
\ were in the first line.\r\n7. (Hitting Enter runs the command and fills the\
\ whole terminal view again.)\r\n"
- source_sentence: "Settings editor: focus border is cut off \r\n<img width=\"1408\"\
\ alt=\"Screenshot 2023-05-26 at 10 32 44\" src=\"https://github.com/microsoft/vscode/assets/22350/aec258c6-f4bd-4585-930b-5b6ed8698ca4\"\
>\r\n\r\n\r\ncc @roblourens "
sentences:
- "Drag and drop in notebooks causes unexpected scrolling Testing #119715\r\n\r\n\
(using `\"notebook.experimental.useMarkdownRenderer\": false`)\r\n\r\nMoving cells\
\ via drag and drops makes the scroll bar act a little unexpectedly. I think the\
\ behavior I expect to to keep ensure scrollTop remains identical after dropping.\r\
\n\r\n\r\
\n\r\n"
- "VSCode Terminal History Being Set To Local Directory History After Update After\
\ a recent update of VSCode - Insiders, I'm getting local terminal history instead\
\ of global history when using Zsh.\r\n\r\n<!-- \U0001F527 Launch with `code --disable-extensions`\
\ to check. -->\r\nDoes this issue occur when all extensions are disabled?: Yes\r\
\n\r\n- VS Code Version: code-insiders 1.74.0-insider (Universal)\r\n- OS Version:\
\ macOS 12.6\r\n\r\nSteps to Reproduce:\r\n\r\n1. open VSCode - Insiders\r\n2.\
\ open the VSCode - Insiders integrated terminal using Zsh\r\n3. type `history\
\ | wc -l`\r\n4. open a regular macOS terminal\r\n5. type `history | wc -l`\r\n\
\r\nExpected:\r\n\r\nI expected to see the global history within the VS Code -\
\ Insiders integrated terminal.\r\n\r\nActual:\r\n\r\nThe VS Code - Insiders integrated\
\ terminal is showing only local history.\r\n\r\nThe history within the VSCode\
\ - Insiders integrated terminal should display the same global history as it\
\ appears in the normal macOS terminal.\r\n"
- "Emmet Expansion: Count down ($@-) doesn't work Using a multiplier with the count\
\ down syntax, ($@-), produces incorrect results. Per the example on [docs.emmet.io](https://docs.emmet.io/cheat-sheet/\
\ ) under `Item numbering: $`:\r\n\r\n`ul>li.item$@-*5` should expand to:\r\n\r\
\n```\r\n<ul>\r\n <li class=\"item5\"></li>\r\n <li class=\"item4\"></li>\r\
\n <li class=\"item3\"></li>\r\n <li class=\"item2\"></li>\r\n <li class=\"\
item1\"></li>\r\n</ul>\r\n```\r\n\r\nActual output:\r\n\r\n```\r\n<ul>\r\n \
\ <li class=\"item1@-\"></li>\r\n <li class=\"item2@-\"></li>\r\n <li class=\"\
item3@-\"></li>\r\n <li class=\"item4@-\"></li>\r\n <li class=\"item5@-\"\
></li>\r\n</ul>\r\n```"
- source_sentence: "Terminal Rename Failure Type: <b>Bug</b>\r\n\r\nTerminal left-click\
\ renaming fails\r\n\r\
\n\r\nVS Code version: Code 1.83.1 (f1b07bd25dfad64b0167beb15359ae573aecd2cc,\
\ 2023-10-10T23:48:05.904Z)\r\nOS version: Windows_NT x64 10.0.22621\r\nModes:\r\
\n\r\n\r\n<!-- generated by issue reporter -->"
sentences:
- "Source control shows empty original file in side-by-side comparison when project\
\ is located on file share Issue Type: <b>Bug</b>\r\n\r\n1. Clone a git repository\
\ on a remote file share\r\n2. Open folder \\\\\\remote_system\\project_folder\r\
\n3. Edit one of the files\r\n4. Open source control pane\r\n5. Select modified\
\ file\r\n\r\nExpected result: Side-by-side comparison opens showing changes\r\
\n\r\nActual result: Side-by-side comparison opens but the original file is empty\
\ and all lines in the file are marked as new. Running git diff in terminal on\
\ the file does show the modified lines.\r\n\r\nVS Code version: Code 1.64.1 (d6ee99e4c045a6716e5c653d7da8e9ae6f5a8b03,\
\ 2022-02-07T17:16:22.147Z)\r\nOS version: Windows_NT x64 10.0.19044\r\nRestricted\
\ Mode: No\r\n\r\n<details>\r\n<summary>System Info</summary>\r\n\r\n|Item|Value|\r\
\n|---|---|\r\n|CPUs|Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (4 x 2808)|\r\n\
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads:\
\ enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization:\
\ enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl:\
\ enabled<br>webgl2: enabled|\r\n|Load (avg)|undefined|\r\n|Memory (System)|31.86GB\
\ (18.00GB free)|\r\n|Process Argv|--crash-reporter-id 32eba180-7fe3-4179-bbbd-4c1a47ffb697|\r\
\n|Screen Reader|no|\r\n|VM|60%|\r\n</details><details><summary>Extensions (1)</summary>\r\
\n\r\nExtension|Author (truncated)|Version\r\n---|---|---\r\nqlik|Gim|0.5.0\r\n\
\r\n\r\n</details><details>\r\n<summary>A/B Experiments</summary>\r\n\r\n```\r\
\nvsliv368:30146709\r\nvsreu685:30147344\r\npython383cf:30185419\r\nvspor879:30202332\r\
\nvspor708:30202333\r\nvspor363:30204092\r\nvstes627cf:30244335\r\npythonvspyl392cf:30425750\r\
\npythontb:30283811\r\npythonptprofiler:30281270\r\nvsdfh931:30280409\r\nvshan820:30294714\r\
\nvstes263:30335439\r\npythondataviewer:30285071\r\nvscod805:30301674\r\npythonvspyt200:30340761\r\
\nbinariesv615:30325510\r\nbridge0708:30335490\r\nbridge0723:30353136\r\nvsaa593cf:30376535\r\
\nvsc1dst:30433059\r\npythonvs932:30410667\r\nwslgetstartedc:30433508\r\nvscop453:30404998\r\
\nvsrem710:30416614\r\n\r\n```\r\n\r\n</details>\r\n\r\n<!-- generated by issue\
\ reporter -->"
- "Tabs aren't aligned * move the sash back and worth\r\n* \U0001F9C0 the tabs aren't\
\ left aligned anymore\r\n\r\n<img width=\"253\" alt=\"Screenshot 2021-04-15 at\
\ 17 42 15\" src=\"https://user-images.githubusercontent.com/1794099/114897873-f2331a00-9e11-11eb-8157-2a077ad31853.png\"\
>\r\n"
- "Opening quick text search with a selection should select the text in the input\
\ by default Steps to Reproduce:\r\n\r\n* set up a keybinding to launch quick\
\ text search\r\n* open a diff editor\r\n* select some text\r\n* launch quick\
\ text search\r\n\r\nExpected:\r\n\r\nThe text to be highlighted:\r\n\r\n\r\n\
\r\
\n\r\nActual:\r\n\r\nThe text is not highlighted:\r\n\r\n\r\n\r\n\r\
\n\r\n\r\nHighlighting it is good because if I _don't_ want my selection to be\
\ used, I just type over it. This aligns with the find widget:\r\n\r\n\r\n\r\
\n\r\n"
- source_sentence: "Shell integration: bash and zsh don't serialize \\n and ; characters\
\ Part of https://github.com/microsoft/vscode/issues/155639\r\n\r\nRepro:\r\n\r\
\n1. Open a bash or zsh session\r\n2. Run:\r\n ```sh\r\n echo \"a\r\n … b\"\
\r\n ```\r\n \r\
\n3. ctrl+alt+r to run recent command, select the last command, \U0001F41B it's\
\ run without the new line\r\n \r\
\n"
sentences:
- "TreeView state out of sync Testing #117304\r\n\r\nRepro: Not Sure\r\n\r\nTest\
\ state shows passed in file but still running in tree view.\r\n\r\n\r\
\n"
- "rename terminal in new window says no argument provided Testing #196195\r\n\r\
\n\r\n\r\nhttps://github.com/microsoft/vscode/assets/29464607/43eb0432-9ca4-435d-ad9c-ed084bb93f5a\r\
\n\r\n\r\n\r\n"
- "Setting icon and color in createTerminal API no longer works correctly See https://github.com/fabiospampinato/vscode-terminals/issues/77\r\
\n\r\nLooks like the default tab color/icon change probably regressed this.\r\n\
\r\n"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: triplet
name: Triplet
dataset:
name: bge base en train
type: bge-base-en-train
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: cosine_accuracy
value: 0.9523809552192688
name: Cosine Accuracy
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("aaa961/finetuned-bge-m3-base-en")
# Run inference
sentences = [
'Shell integration: bash and zsh don\'t serialize \\n and ; characters Part of https://github.com/microsoft/vscode/issues/155639\r\n\r\nRepro:\r\n\r\n1. Open a bash or zsh session\r\n2. Run:\r\n ```sh\r\n echo "a\r\n … b"\r\n ```\r\n \r\n3. ctrl+alt+r to run recent command, select the last command, 🐛 it\'s run without the new line\r\n \r\n',
'TreeView state out of sync Testing #117304\r\n\r\nRepro: Not Sure\r\n\r\nTest state shows passed in file but still running in tree view.\r\n\r\n\r\n',
'Setting icon and color in createTerminal API no longer works correctly See https://github.com/fabiospampinato/vscode-terminals/issues/77\r\n\r\nLooks like the default tab color/icon change probably regressed this.\r\n\r\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4264, 0.4315],
# [0.4264, 1.0000, 0.4278],
# [0.4315, 0.4278, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `bge-base-en-train`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Triplet
* Dataset: `bge-base-en-train`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9524** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 336 training samples
* Columns: <code>texts</code> and <code>label</code>
* Approximate statistics based on the first 336 samples:
| | texts | label |
|:--------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | int |
| details | <ul><li>min: 12 tokens</li><li>mean: 340.9 tokens</li><li>max: 996 tokens</li></ul> | <ul><li>0: ~1.19%</li><li>5: ~0.60%</li><li>7: ~0.60%</li><li>8: ~0.60%</li><li>9: ~0.60%</li><li>10: ~0.60%</li><li>12: ~0.60%</li><li>13: ~0.60%</li><li>15: ~0.60%</li><li>16: ~0.60%</li><li>17: ~0.60%</li><li>19: ~0.60%</li><li>20: ~0.60%</li><li>21: ~0.60%</li><li>25: ~0.60%</li><li>26: ~0.60%</li><li>27: ~0.60%</li><li>28: ~0.60%</li><li>29: ~0.89%</li><li>30: ~0.89%</li><li>31: ~2.08%</li><li>33: ~1.49%</li><li>34: ~0.60%</li><li>36: ~0.60%</li><li>37: ~0.89%</li><li>38: ~0.60%</li><li>42: ~0.89%</li><li>43: ~1.19%</li><li>45: ~0.60%</li><li>47: ~0.60%</li><li>48: ~0.60%</li><li>49: ~0.60%</li><li>50: ~0.60%</li><li>51: ~0.60%</li><li>52: ~1.19%</li><li>53: ~0.60%</li><li>55: ~0.60%</li><li>57: ~1.19%</li><li>59: ~0.60%</li><li>60: ~0.60%</li><li>61: ~0.60%</li><li>62: ~0.60%</li><li>63: ~0.60%</li><li>64: ~0.89%</li><li>65: ~0.60%</li><li>67: ~0.60%</li><li>68: ~0.89%</li><li>69: ~0.60%</li><li>70: ~0.60%</li><li>71: ~0.60%</li><li>72: ~0.60%</li><li>73: ~0.60%</li><li>74: ~0.60%</li><li>75: ~0.60%</li><li>77: ~0.60%</li><li>78: ~0.60%</li><li>82: ~0.89%</li><li>84: ~0.60%</li><li>85: ~0.60%</li><li>86: ~0.60%</li><li>87: ~0.60%</li><li>88: ~0.60%</li><li>89: ~0.60%</li><li>90: ~0.60%</li><li>91: ~0.60%</li><li>92: ~0.89%</li><li>93: ~0.60%</li><li>95: ~0.60%</li><li>96: ~0.60%</li><li>97: ~0.60%</li><li>98: ~0.60%</li><li>99: ~0.60%</li><li>100: ~0.60%</li><li>101: ~0.60%</li><li>103: ~0.60%</li><li>105: ~0.60%</li><li>108: ~0.60%</li><li>109: ~0.60%</li><li>110: ~0.89%</li><li>113: ~0.60%</li><li>115: ~0.60%</li><li>116: ~0.60%</li><li>117: ~0.60%</li><li>118: ~0.60%</li><li>119: ~0.60%</li><li>120: ~1.49%</li><li>121: ~0.60%</li><li>126: ~0.60%</li><li>127: ~0.60%</li><li>130: ~0.60%</li><li>131: ~0.60%</li><li>134: ~0.60%</li><li>135: ~0.60%</li><li>136: ~0.60%</li><li>137: ~0.60%</li><li>138: ~0.60%</li><li>139: ~0.60%</li><li>140: ~0.60%</li><li>143: ~0.60%</li><li>144: ~0.60%</li><li>147: ~0.60%</li><li>148: ~0.60%</li><li>149: ~0.60%</li><li>150: ~0.60%</li><li>152: ~0.60%</li><li>153: ~0.60%</li><li>154: ~0.60%</li><li>155: ~0.60%</li><li>157: ~0.60%</li><li>158: ~0.60%</li><li>159: ~0.60%</li><li>160: ~0.60%</li><li>161: ~0.60%</li><li>163: ~0.60%</li><li>164: ~0.60%</li><li>165: ~0.60%</li><li>168: ~0.60%</li><li>170: ~0.60%</li><li>171: ~0.89%</li><li>174: ~0.89%</li><li>175: ~0.60%</li><li>176: ~0.60%</li><li>177: ~0.60%</li><li>178: ~0.60%</li><li>179: ~0.60%</li><li>180: ~0.60%</li><li>181: ~0.60%</li><li>182: ~0.60%</li><li>183: ~0.60%</li><li>185: ~0.60%</li><li>186: ~0.60%</li><li>187: ~0.60%</li><li>192: ~0.60%</li><li>193: ~0.60%</li><li>194: ~0.60%</li><li>195: ~0.89%</li><li>196: ~0.60%</li><li>197: ~0.60%</li><li>198: ~0.60%</li><li>199: ~0.60%</li><li>202: ~0.60%</li><li>204: ~0.60%</li><li>205: ~0.89%</li><li>207: ~0.60%</li><li>208: ~0.60%</li><li>209: ~0.60%</li><li>210: ~0.60%</li><li>213: ~0.60%</li><li>214: ~0.60%</li><li>215: ~0.60%</li><li>216: ~0.60%</li><li>218: ~0.60%</li></ul> |
* Samples:
| texts | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Branch list is sometimes out of order
<br>Type: <b>Bug</b>
<br>
<br>1. Open a workspace
<br>2. Quickly open the branch picker and type `main`
<br>
<br>**Bug**
<br>The first time you do this, sometimes you end up with an unordered list:
<br>
<br>
<br>
<br>
<br>The correct order shows up when you keep start typing or try doing this again:
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>VS Code version: Code - Insiders 1.91.0-insider (Universal) (0354163c1c66b950b0762364f5b4cd37937b624a, 2024-06-26T10:12:33.304Z)
<br>OS version: Darwin arm64 23.5.0
<br>Modes:
<br>
<br><details>
<br><summary>System Info</summary>
<br>
<br>\|Item\|Value\|
<br>\|---\|---\|
<br>\|CPUs\|Apple M2 Max (12 x 2400)\|
<br>\|GPU Status\|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>ope...</code> | <code>218</code> |
| <code>Git Branch Picker Race Condition If I paste the branch too quickly and then press enter, it does not switch to it, but creates a new branch.
<br>This breaks muscle memory, as it works when you do it slowly.
<br>
<br>
<br>
<br>
<br>Once loading completes, it should select the branch again.</code> | <code>218</code> |
| <code>links aren't discoverable to screen reader users in markdown documents They're only discoverable via visual distinction and the action that can be taken (IE opening them) is only indicated in the tooltip AFAICT.
<br>
<br>https://github.com/microsoft/vscode/assets/29464607/09d28b81-c2cc-4477-b1fc-7b1de1baae74
<br>
<br></code> | <code>177</code> |
* Loss: [<code>BatchSemiHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchsemihardtripletloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 70 evaluation samples
* Columns: <code>texts</code> and <code>label</code>
* Approximate statistics based on the first 70 samples:
| | texts | label |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| type | string | int |
| details | <ul><li>min: 58 tokens</li><li>mean: 303.57 tokens</li><li>max: 864 tokens</li></ul> | <ul><li>1: ~2.86%</li><li>2: ~2.86%</li><li>6: ~2.86%</li><li>11: ~5.71%</li><li>14: ~2.86%</li><li>23: ~2.86%</li><li>32: ~5.71%</li><li>35: ~2.86%</li><li>39: ~2.86%</li><li>40: ~2.86%</li><li>46: ~2.86%</li><li>54: ~2.86%</li><li>83: ~2.86%</li><li>102: ~2.86%</li><li>104: ~4.29%</li><li>111: ~2.86%</li><li>122: ~2.86%</li><li>123: ~2.86%</li><li>125: ~2.86%</li><li>145: ~2.86%</li><li>146: ~2.86%</li><li>162: ~2.86%</li><li>166: ~2.86%</li><li>169: ~2.86%</li><li>184: ~2.86%</li><li>188: ~2.86%</li><li>190: ~2.86%</li><li>200: ~2.86%</li><li>201: ~4.29%</li><li>203: ~2.86%</li><li>206: ~2.86%</li><li>217: ~2.86%</li></ul> |
* Samples:
| texts | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Ctrl+I stopped working after first hold+talk+release Testing #213355
<br>
<br>Screencast shows that it seems to be in the wrong context and is trying to stop the session?
<br>
<br>
<br>
<br>Repro was just asking "Testing testing" and then trying to ask something else</code> | <code>217</code> |
| <code>`Ctrl + I` does not work when chat input field has focus Testing #213355
<br>
<br>`Ctrl + I` works in the editor and when I hold it, I get into speech mode. But when the chat input field (panel or inline chat) already has focus, `Ctrl + I` does not work.
<br>
<br>(Connected to Windows through Remote Desktop in case that matters.)
<br></code> | <code>217</code> |
| <code>Terminal renaming not functioning as expected in editor area <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<br><!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<br><!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<br><!-- 🔎 Search existing issues to avoid creating duplicates. -->
<br><!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<br><!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<br><!-- 🔧 Launch with `code --disable-extensions` to check. -->
<br>Does this issue occur when all extensions are disabled?: Yes
<br>
<br><!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<br><!-- 📣 Issues caused by an extension need to be reported directly to the extensi...</code> | <code>201</code> |
* Loss: [<code>BatchSemiHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchsemihardtripletloss)
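For context, `BatchSemiHardTripletLoss` mines, within each batch, negatives that are farther from the anchor than the positive yet still inside the margin, and penalizes them with the standard triplet objective (a standard formulation; the margin $\alpha$ and distance $d$ follow the library defaults):

$$\mathcal{L}(a, p, n) = \max\bigl(0,\ d(a, p) - d(a, n) + \alpha\bigr), \qquad d(a, p) < d(a, n) < d(a, p) + \alpha$$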
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
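For reference, a minimal sketch of how these non-default hyperparameters map onto the Sentence Transformers trainer API (the base-model name and the `train_dataset`/`eval_dataset` objects are placeholders, not taken from this card):

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import BatchSemiHardTripletLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # placeholder base model

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate samples within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: columns "texts" and "label"
    eval_dataset=eval_dataset,    # placeholder
    loss=BatchSemiHardTripletLoss(model),
)
trainer.train()
```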
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | bge-base-en-train_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------:|
| -1 | -1 | - | - | 1.0 |
| 2.3810 | 100 | 0.0 | nan | 0.9524 |
| 4.7619 | 200 | 0.0 | nan | 0.9524 |
| -1 | -1 | - | - | 1.0 |
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 5.1.0
- Transformers: 4.55.3
- PyTorch: 2.7.1+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### BatchSemiHardTripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
entfane/math-virtuoso-7B
|
entfane
| 2025-09-01T09:21:35Z | 26 | 0 | null |
[
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:TIGER-Lab/MathInstruct",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"region:us"
] |
text-generation
| 2025-08-19T06:22:11Z |
---
datasets:
- TIGER-Lab/MathInstruct
language:
- en
base_model:
- mistralai/Mistral-7B-v0.3
pipeline_tag: text-generation
---
<img src="https://huggingface.co/entfane/math-virtuoso-7B/resolve/main/math-virtuoso.png" width="400" height="400"/>
# Math Virtuoso 7B
This model is a version of Mistral 7B v0.3 fine-tuned on math instructions (TIGER-Lab/MathInstruct).
### Inference
```python
# pip install transformers accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "entfane/math-virtuoso-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [
    {"role": "user", "content": "What's the derivative of 2x^2?"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
encoded_input = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**encoded_input, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=False))
```
|
bah63843/blockassist-bc-plump_fast_antelope_1756718432
|
bah63843
| 2025-09-01T09:21:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:21:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llm-jp/optimal-sparsity-code-d1024-E64-k16-6.7B-A1.9B
|
llm-jp
| 2025-09-01T09:21:14Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:28:16Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d512-E128-k16-3.3B-A520M
|
llm-jp
| 2025-09-01T09:21:08Z | 7 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:22:00Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d2048-E128-k8-52.2B-A3.9B
|
llm-jp
| 2025-09-01T09:21:02Z | 7 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:40:00Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d1024-E128-k8-13.2B-A1.1B
|
llm-jp
| 2025-09-01T09:20:51Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:26:41Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d1024-E64-k8-6.7B-A1.1B
|
llm-jp
| 2025-09-01T09:20:49Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:26:27Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d2048-E16-k4-7.1B-A2.3B
|
llm-jp
| 2025-09-01T09:20:27Z | 7 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:34:08Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d1024-E256-k4-26.0B-A670M
|
llm-jp
| 2025-09-01T09:20:24Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:25:21Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d1024-E128-k4-13.2B-A670M
|
llm-jp
| 2025-09-01T09:20:22Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:24:55Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d1024-E32-k4-3.5B-A670M
|
llm-jp
| 2025-09-01T09:20:18Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:24:34Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756718334
|
Ferdi3425
| 2025-09-01T09:20:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:19:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llm-jp/optimal-sparsity-code-d2048-E32-k2-13.6B-A1.5B
|
llm-jp
| 2025-09-01T09:19:59Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:30:28Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
llm-jp/optimal-sparsity-code-d512-E16-k2-520M-A170M
|
llm-jp
| 2025-09-01T09:19:39Z | 8 | 0 | null |
[
"safetensors",
"mixtral",
"arxiv:2508.18672",
"region:us"
] | null | 2025-08-21T15:04:22Z |
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
year={2025},
eprint={2508.18672},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.18672},
}
```
|
aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic
|
aisingapore
| 2025-09-01T09:18:50Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2504.05747",
"base_model:aisingapore/Llama-SEA-LION-v3-70B-IT",
"base_model:quantized:aisingapore/Llama-SEA-LION-v3-70B-IT",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-21T04:54:52Z |
---
library_name: transformers
pipeline_tag: text-generation
base_model:
- aisingapore/Llama-SEA-LION-v3-70B-IT
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
---
<div>
<img src="llama_sea_lion_3.5_70b_r_banner.png"/>
</div>
# Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic
Last updated: 2025-09-01
[**SEA-LION**](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned
for the Southeast Asia (SEA) region.
### Model Description
<!-- Provide a longer summary of what this model is. -->
SEA-LION stands for *Southeast Asian Languages In One Network*.
Quantization was performed on Llama-SEA-LION-v3.5-70B-R to produce optimized variants that reduce memory requirements
while maintaining model quality. These quantized models support inference on a range of consumer-grade GPUs
and are compatible with various inference engines.
For tokenization, the model employs the default tokenizer used in Llama 3.1-70B-Instruct.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Context length:** 128k tokens
- **Language(s):** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
- **Quantized from model:** Llama-SEA-LION-v3.5-70B-R
This repo contains the FP8-Dynamic format model files for aisingapore/Llama-SEA-LION-v3.5-70B-R
Model Weights included in this repository:
- [Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic](https://huggingface.co/aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic)
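For illustration, a minimal loading sketch, assuming a recent vLLM build that reads compressed-tensors FP8 checkpoints directly; the `tensor_parallel_size` value is an illustrative assumption, not a requirement stated in this card:

```python
from vllm import LLM, SamplingParams

# Load the FP8-Dynamic checkpoint directly; shard across 4 GPUs (assumption).
llm = LLM(
    model="aisingapore/Llama-SEA-LION-v3.5-70B-R-FP8-Dynamic",
    tensor_parallel_size=4,
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Tell me about Southeast Asia."], params)
print(outputs[0].outputs[0].text)
```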
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Test Results
For details on Llama-SEA-LION-v3.5-70B-R performance, please refer to the SEA-HELM leaderboard, [Leaderboard results on SEA-HELM](https://leaderboard.sea-lion.ai/).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model has not been aligned for safety. Developers and users should perform their own safety
fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
*The model was not tested for robustness against adversarial prompting.* Users should be aware that the model exhibits certain limitations that warrant consideration.
Like many LLMs, the model can hallucinate and occasionally generates irrelevant content,
introducing fictional elements that are not grounded in the provided context.
Users should also exercise caution in interpreting and validating the model's responses
due to the potential inconsistencies.
## More Information
This is the repository for the commercial instruction-tuned model.
The model has not been aligned for safety. Developers and users should perform their own safety
fine-tuning and related security measures. In no event shall the authors be held liable
for any claims, damages, or other liabilities arising from the use of the released weights and codes.
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and
do not reflect the views of the National Research Foundation or the National University of Singapore.
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
For more info, please contact us at sealion@aisingapore.org
## Team
Antonyrex Sajeban, Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Liew Rachel, Limkonchotiwat Peerat, Liu Bing Jie Darius,
Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David,
Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter,
Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin
## Contact
sealion@aisingapore.org
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1756716940
|
Sayemahsjn
| 2025-09-01T09:14:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:14:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goptouy/blockassist-bc-alert_melodic_swan_1756717980
|
goptouy
| 2025-09-01T09:13:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:13:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
goptouy/blockassist-bc-beaked_frisky_ox_1756717907
|
goptouy
| 2025-09-01T09:12:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked frisky ox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:11:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked frisky ox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756717777
|
Ferdi3425
| 2025-09-01T09:10:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:10:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cebbbopwq/blockassist-bc-hoarse_armored_emu_1756717801
|
cebbbopwq
| 2025-09-01T09:10:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hoarse armored emu",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:10:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hoarse armored emu
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sanjeev-bhandari01/taski-v3-Q
|
sanjeev-bhandari01
| 2025-09-01T09:09:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-01T09:09:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taski-v3-Q
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle, gymnasium as gym
from huggingface_hub import hf_hub_download
# load_from_hub (course helper) == download + unpickle:
model = pickle.load(open(hf_hub_download("sanjeev-bhandari01/taski-v3-Q", "q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
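For a quick sanity check, the Q-table can be rolled out greedily. A minimal sketch, assuming the pickled dict exposes a `qtable` key (as in the Deep RL course template) and a Gymnasium-style step API:

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```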
|
tencent/Hunyuan-7B-Instruct-AWQ-Int4
|
tencent
| 2025-09-01T09:09:47Z | 360 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hunyuan_v1_dense",
"text-generation",
"conversational",
"base_model:tencent/Hunyuan-4B-Pretrain",
"base_model:quantized:tencent/Hunyuan-4B-Pretrain",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2025-07-31T08:33:17Z |
---
base_model:
- tencent/Hunyuan-4B-Pretrain
library_name: transformers
---
<p align="center">
<img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
</p><p></p>
<p align="center">
🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
🤖 <a href="https://modelscope.cn/organization/Tencent-Hunyuan"><b>ModelScope</b></a> |
🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
</p>
<p align="center">
🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
</p>
<p align="center">
<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a> |
<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
</p>
## Model Introduction
Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to the Hunyuan-A13B, thereby inheriting its robust performance characteristics. This comprehensive model family enables flexible deployment optimization - from resource-constrained edge computing with smaller variants to high-throughput production environments with larger models, all while maintaining strong capabilities across diverse scenarios.
### Key Features and Advantages
- **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
- **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
- **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
- **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
## Related News
* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Pretrain**, **Hunyuan-4B-Instruct**, **Hunyuan-7B-Pretrain**, **Hunyuan-7B-Instruct** on Hugging Face.
<br>
## Benchmark
Note: The following benchmarks are evaluated by TRT-LLM-backend on several **base models**.
| Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
|:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
| MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
| MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
| MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
| SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
| BBH | 45.92 | 74.32 | 75.17 | 82.95 |
| GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
| GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
| MATH | 42.95 | 62.85 | 72.25 | 74.85 |
| EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
| MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
| MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
| CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
| Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
| simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
| Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
|:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
| **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
| **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
| **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
| **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
| **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
| **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
| **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
### Use with transformers
First, please install the companion branch of transformers below; support will be merged into the main branch later.
```SHELL
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
1. Pass **"enable_thinking=False"** when calling apply_chat_template.
2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning.
The following code snippet shows how to use the transformers library to load and apply the model. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
We use tencent/Hunyuan-7B-Instruct as an example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
import re
model_name_or_path = "tencent/Hunyuan-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
messages = [
{"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True,return_tensors="pt",
enable_thinking=True # Toggle thinking mode (default: True)
)
outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
output_text = tokenizer.decode(outputs[0])
print("output_text=",output_text)
think_pattern = r'<think>(.*?)</think>'
think_matches = re.findall(think_pattern, output_text, re.DOTALL)
answer_pattern = r'<answer>(.*?)</answer>'
answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
think_content = [match.strip() for match in think_matches][0]
answer_content = [match.strip() for match in answer_matches][0]
print(f"thinking_content:{think_content}\n\n")
print(f"answer_content:{answer_content}\n\n")
```
We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
```json
{
"do_sample": true,
"top_k": 20,
"top_p": 0.8,
"repetition_penalty": 1.05,
"temperature": 0.7
}
```
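For illustration, these settings map directly onto `model.generate` when reusing the `tokenizer`/`model` objects from the snippet above (a sketch, not an official recipe):

```python
outputs = model.generate(
    tokenized_chat.to(model.device),
    max_new_tokens=2048,
    do_sample=True,
    top_k=20,
    top_p=0.8,
    repetition_penalty=1.05,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```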
### Training Data Format
If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
```python
# think_pattern
think = ""
answer = ""
think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
# fast think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "/no_think海水为什么是咸的" },
{"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
]
# slow think pattern
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "1+1=" },
{"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
train_ids = tokenizer.apply_chat_template(messages)
```
### Train with LLaMA-Factory
In the following chapter, we will introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
#### Prerequisites
Verify installation of the following dependencies:
- **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
- **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
- **Transformer Library**: Use the companion branch (Hunyuan-submitted code is pending review)
```
pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
```
#### Data preparation
We need to prepare a custom dataset:
1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
```
[
{
"messages": [
{
"role": "system",
"content": "System prompt (optional)"
},
{
"role": "user",
"content": "Human instruction"
},
{
"role": "assistant",
"content": "Model response"
}
]
}
]
```
Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
2. Define your dataset in the data/dataset_info.json file using the following format:
```
"dataset_name": {
"file_name": "dataset.json",
"formatting": "sharegpt",
"columns": {
"messages": "messages"
},
"tags": {
"role_tag": "role",
"content_tag": "content",
"user_tag": "user",
"assistant_tag": "assistant",
"system_tag": "system"
}
}
```
#### Training execution
1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
```
### model
model_name_or_path: [!!!add the model path here!!!]
### dataset
dataset: [!!!add the dataset name here!!!]
```
3. Execute training commands:
* Single-node training
Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
```
export DISABLE_VERSION_CHECK=1
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
* Multi-node training
Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
```
export DISABLE_VERSION_CHECK=1
FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
```
## Quantization Compression
We used our own [AngleSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and INT4 quantization models. `AngleSlim` is a toolset dedicated to creating a more user-friendly, comprehensive and efficient model compression solution.
### FP8 Quantization
We use FP8-static quantization, which adopts an 8-bit floating-point format and uses a small amount of calibration data (without training) to pre-determine the quantization scales. Model weights and activation values are converted to FP8 format, improving inference efficiency and lowering the deployment threshold. You can quantize with AngleSlim yourself, or directly download and use our quantized open-source models [LINK](https://huggingface.co/).
### Int4 Quantization
We use the GPTQ and AWQ algorithm to achieve W4A16 quantization.
GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted layer by layer via an optimization process that approximates the inverse Hessian. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
AWQ statistically measures activation magnitudes using a small amount of calibration data (without training). For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
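To make the idea concrete, here is a toy, illustrative sketch of per-channel scaling before weight quantization (not the AngleSlim implementation; the scaling rule `s = sqrt(|x|)` and the single-scale quantizer are simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))                      # weights: 4 output x 8 input channels
x = rng.normal(size=8) * np.r_[5.0, np.ones(7)]  # input channel 0 is "salient" (large activations)

def quant_sym(t, bits=4):
    """Naive symmetric round-to-nearest quantization with a single scale."""
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1)
    return np.round(t / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

s = np.sqrt(np.abs(x)).clip(min=1e-6)            # per-input-channel scale from activation magnitude
y_ref = w @ x                                    # full-precision reference output
err_plain = np.abs(quant_sym(w) @ x - y_ref).mean()
err_awq = np.abs((quant_sym(w * s) / s) @ x - y_ref).mean()  # scale up, quantize, scale back
print(f"mean output error, plain: {err_plain:.4f} | AWQ-style scaling: {err_awq:.4f}")
```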
You can quantize with [AngleSlim](https://github.com/tencent/AngelSlim) yourself, or directly download and use our quantized open-source models [LINK](https://huggingface.co/).
#### Quantization Benchmark
This subsection reports benchmark metrics for the quantized Hunyuan models.
| Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
|:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
| DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
| GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
| OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
| AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
## Deployment
For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
Docker images are available at https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags.
### TensorRT-LLM
#### Docker Image
We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
We use tencent/Hunyuan-7B-Instruct as an example.
- To get started:
https://hub.docker.com/r/hunyuaninfer/hunyuan-large/tags
```
docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
```
docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
```
- Prepare Configuration file:
```
cat >/path/to/extra-llm-api-config.yml <<EOF
use_cuda_graph: true
cuda_graph_padding_enabled: true
cuda_graph_batch_sizes:
- 1
- 2
- 4
- 8
- 16
- 32
print_iter_log: true
EOF
```
- Start the API server:
```
trtllm-serve \
/path/to/HunYuan-moe-7B \
--host localhost \
--port 8000 \
--backend pytorch \
--max_batch_size 32 \
--max_num_tokens 16384 \
--tp_size 2 \
--kv_cache_free_gpu_memory_fraction 0.6 \
--trust_remote_code \
--extra_llm_api_options /path/to/extra-llm-api-config.yml
```
### vllm
#### Start
Please use vLLM version v0.10.0 or higher for inference.
We use tencent/Hunyuan-7B-Instruct as an example.
- Download the model files:
- Hugging Face: downloaded automatically by vLLM.
- ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
- Model downloaded via Hugging Face:
```shell
export MODEL_PATH=tencent/Hunyuan-7B-Instruct
```
- Model downloaded via ModelScope:
```shell
export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
```
- Start the API server:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--quantization experts_int8 \
--served-model-name hunyuan \
2>&1 | tee log_server.txt
```
- After the service script is running successfully, send a test request:
```shell
curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
"model": "hunyuan",
"messages": [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
}
],
"max_tokens": 2048,
"temperature":0.7,
"top_p": 0.6,
"top_k": 20,
"repetition_penalty": 1.05,
"stop_token_ids": [127960]
}'
```
#### Quantized model deployment
This section describes the process of deploying post-quantization models using vLLM.
The default server runs in BF16.
##### Int8 quantized model deployment
Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables.
Next we start the Int8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization experts_int8 \
2>&1 | tee log_server.txt
```
##### Int4 quantized model deployment
Deploying the Int4-weight-only version of the HunYuan-7B model only requires setting the environment variables below, using the GPTQ method:
```shell
export MODEL_PATH=PATH_TO_INT4_MODEL
```
Next we start the Int4 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--quantization gptq_marlin \
2>&1 | tee log_server.txt
```
##### FP8 quantized model deployment
Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables.
Next we start the FP8 service. Run:
```shell
python3 -m vllm.entrypoints.openai.api_server \
--host 0.0.0.0 \
--port 8000 \
--trust-remote-code \
--model ${MODEL_PATH} \
--tensor-parallel-size 1 \
--dtype bfloat16 \
--served-model-name hunyuan \
--kv-cache-dtype fp8 \
2>&1 | tee log_server.txt
```
### SGLang
#### Docker Image
We also provide a pre-built Docker image based on the latest version of SGLang.
We use tencent/Hunyuan-7B-Instruct as an example.
To get started:
- Pull the Docker image
```
docker pull lmsysorg/sglang:latest
```
- Start the API server:
```
docker run --entrypoint="python3" --gpus all \
--shm-size 32g \
-p 30000:30000 \
--ulimit nproc=10000 \
--privileged \
--ipc=host \
lmsysorg/sglang:latest \
-m sglang.launch_server --model-path hunyuan/huanyuan_7B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
```
## Contact Us
If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email (hunyuan_opensource@tencent.com).
|
goptouy/blockassist-bc-alert_melodic_swan_1756717684
|
goptouy
| 2025-09-01T09:08:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"alert melodic swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:08:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- alert melodic swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WenFengg/expert_14_k7_19_re
|
WenFengg
| 2025-09-01T09:08:25Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-01T09:07:46Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF
|
mradermacher
| 2025-09-01T09:08:14Z | 16 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TheDrummer/GLM-Steam-106B-A12B-v1",
"base_model:quantized:TheDrummer/GLM-Steam-106B-A12B-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-31T13:26:34Z |
---
base_model: TheDrummer/GLM-Steam-106B-A12B-v1
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/TheDrummer/GLM-Steam-106B-A12B-v1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#GLM-Steam-106B-A12B-v1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
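The multi-part files below only need to be concatenated byte-for-byte before use. A minimal Python equivalent of `cat part1 part2 > out` (the filename is illustrative, matching the Q4_K_S entry in the table):

```python
import shutil
from pathlib import Path

# Stream each part into the output file in order; avoids loading multi-GB parts in memory.
parts = sorted(Path(".").glob("GLM-Steam-106B-A12B-v1.i1-Q4_K_S.gguf.part*"))
with open("GLM-Steam-106B-A12B-v1.i1-Q4_K_S.gguf", "wb") as out:
    for p in parts:
        with open(p, "rb") as f:
            shutil.copyfileobj(f, out)
```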
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.imatrix.gguf) | imatrix | 0.3 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q2_K.gguf) | i1-Q2_K | 45.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 49.9 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.5 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 52.5 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 53.2 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 57.1 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 59.5 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 60.2 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 62.7 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 67.0 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 69.4 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 78.4 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 83.5 | |
| [PART 1](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/GLM-Steam-106B-A12B-v1-i1-GGUF/resolve/main/GLM-Steam-106B-A12B-v1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 99.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
omerbkts/blockassist-bc-keen_fast_giraffe_1756717440
|
omerbkts
| 2025-09-01T09:04:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:04:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
the-usan/urdu-crime-adapter-khud_kusi-v1
|
the-usan
| 2025-09-01T09:03:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T09:03:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756715741
|
GroomerG
| 2025-09-01T09:00:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T09:00:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756717007
|
liukevin666
| 2025-09-01T09:00:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:57:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
unamedai/KateAI
|
unamedai
| 2025-09-01T08:59:58Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"KateAIForCausalLM",
"text-generation",
"en",
"dataset:roneneldan/TinyStories",
"dataset:bookcorpus/bookcorpus",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-07-31T07:31:28Z |
---
language:
- en
pipeline_tag: text-generation
datasets:
- roneneldan/TinyStories
- bookcorpus/bookcorpus
---
# **Warning!!**
This model is in the process of being migrated from the GPT-2 architecture to a custom architecture.
Note that it may be temporarily unavailable while the migration is in progress.
Thank you for understanding.
# Kate
This is a custom model for text generation.
Money wasted: 11€
## Model Details
- `model_type`: KateAIForCausalLM
## Usage
Please use the API of the Space.
```bash
pip install gradio_client
```
```python
from gradio_client import Client

# Connect to the hosted Space and request a completion from its /predict endpoint.
client = Client("unamedai/Kate")
result = client.predict(
    message="Once,",
    max_tokens=512,
    temperature=0.8,
    top_p=0.95,
    api_name="/predict"
)
print(result)
```
|
AnerYubo/blockassist-bc-pesty_graceful_grouse_1756717035
|
AnerYubo
| 2025-09-01T08:57:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty graceful grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:57:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty graceful grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cookienter/lifechart-bert-base-classifier-hptuning
|
cookienter
| 2025-09-01T08:55:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-01T08:39:43Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: lifechart-bert-base-classifier-hptuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lifechart-bert-base-classifier-hptuning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8277
- Macro F1: 0.7804
- Precision: 0.7759
- Recall: 0.7926
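For reference, here is a minimal inference sketch. The repo id is taken from this card; the example input is illustrative, and the label set is not documented here, so the predicted labels must be interpreted via the model's `id2label` config.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub (repo id from this card).
classifier = pipeline(
    "text-classification",
    model="cookienter/lifechart-bert-base-classifier-hptuning",
)

# The returned label names come from the model's config (id2label);
# they are not documented in this card, so inspect them before relying on them.
print(classifier("Example text to categorize."))
```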
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.2401122963670106e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.10116633724491
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 1.9825 | 1.0 | 821 | 0.8658 | 0.7415 | 0.7046 | 0.7991 |
| 0.6528 | 2.0 | 1642 | 0.8036 | 0.7534 | 0.7273 | 0.7929 |
| 0.3396 | 3.0 | 2463 | 0.8277 | 0.7804 | 0.7759 | 0.7926 |
### Framework versions
- Transformers 4.55.4
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4
|
meershad/blockassist-bc-grassy_extinct_snake_1756716831
|
meershad
| 2025-09-01T08:55:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grassy extinct snake",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:54:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grassy extinct snake
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756716832
|
bah63843
| 2025-09-01T08:54:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:54:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kjydb/lerobot_test_160
|
kjydb
| 2025-09-01T08:54:29Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:kjydb/lerobot_test_160",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-01T08:53:55Z |
---
base_model: lerobot/smolvla_base
datasets: kjydb/lerobot_test_160
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- robotics
- smolvla
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
omerbektass/blockassist-bc-keen_fast_giraffe_1756716730
|
omerbektass
| 2025-09-01T08:52:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:52:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ngophong/blockassist-bc-agile_stealthy_dog_1756716382
|
ngophong
| 2025-09-01T08:47:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:47:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nayaksomkar/tinyllama-PsychLM
|
nayaksomkar
| 2025-09-01T08:47:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"unsloth",
"made-with-unsloth",
"dataset:jkhedri/psychology-dataset",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2025-09-01T08:29:10Z |
---
base_model: unsloth/tinyllama-bnb-4bit
library_name: peft
license: apache-2.0
datasets:
- jkhedri/psychology-dataset
tags:
- unsloth
- made-with-unsloth
---
# TinyLlama Psychology Q&A Model
A fine-tuned TinyLlama 1.1B model specialized for psychology-related questions and responses.
<a href="https://unsloth.ai">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" alt="Made with Unsloth" width="150"/>
</a>
## Model Details
### Model Description
This model is a fine-tuned version of TinyLlama 1.1B specifically trained on psychology Q&A data to provide helpful psychological guidance and responses.
- **Developed by:** nayaksomkar
- **Model type:** Causal Language Model (Psychology Fine-tuned)
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
- **Made with:** [Unsloth](https://unsloth.ai)
## Uses
### Direct Use
This model can be used for:
- Psychology-related question answering
- Mental health guidance conversations
- Educational psychology content
- Therapeutic conversation assistance
### Out-of-Scope Use
This model should NOT be used for:
- Professional therapy or clinical diagnosis
- Crisis intervention
- Replacing licensed mental health professionals
- Medical advice or treatment recommendations
## Training Details
### Training Data
Fine-tuned on the jkhedri/psychology-dataset containing psychology Q&A pairs.
### Training Procedure
- **Training Framework:** Unsloth + LoRA
- **Training Platform:** Kaggle Notebooks
- **Training Time:** 30 minutes
- **Training Regime:** fp16 mixed precision
#### Training Hyperparameters
- **Base Model:** TinyLlama 1.1B
- **LoRA Rank:** 8
- **Learning Rate:** 2e-4
- **Batch Size:** 2 per device
- **Gradient Accumulation:** 4 steps
- **Epochs:** 1
- **Max Sequence Length:** 1024
- **Optimizer:** adamw_8bit
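The hyperparameters above map onto a standard Unsloth + TRL SFT setup. Below is a minimal sketch under that assumption (written against the older TRL `SFTTrainer` API); the `target_modules`, `lora_alpha`, and the `dataset_text_field` column are illustrative guesses, not documented choices.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model (max sequence length from the card).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank 8 per the card, remaining LoRA settings assumed.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

dataset = load_dataset("jkhedri/psychology-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="question",  # assumed column; adapt to the dataset schema
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        fp16=True,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```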
## Technical Specifications
### Compute Infrastructure
#### Hardware
- **GPU:** 2x NVIDIA Tesla T4
- **Platform:** Kaggle Notebooks
- **Memory:** 4-bit quantization
#### Software
- **Framework:** Unsloth
- **Libraries:** PyTorch, Transformers, PEFT
## How to Use
```python
from unsloth import FastLanguageModel
# Load the model
model, tokenizer = FastLanguageModel.from_pretrained("nayaksomkar/tinyllama-PsychLM")
# Enable fast inference
FastLanguageModel.for_inference(model)
# Example usage
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
You are a helpful psychologist. Answer this question:
### Input:
How can I manage anxiety?
### Response:
"""
inputs = tokenizer([prompt], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Yodhasu04/PreThesis
|
Yodhasu04
| 2025-09-01T08:45:10Z | 18 | 0 | null |
[
"safetensors",
"mistral",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:quantized:mistralai/Mistral-7B-v0.3",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-28T07:47:52Z |
---
license: mit
language:
- en
metrics:
- bleu
base_model:
- mistralai/Mistral-7B-v0.3
---
|
Satram/QYA_300_Ej
|
Satram
| 2025-09-01T08:40:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T08:40:38Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Satram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
betreosi/blockassist-bc-stinging_prowling_lion_1756715834
|
betreosi
| 2025-09-01T08:37:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging prowling lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:37:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging prowling lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
akirafudo/blockassist-bc-keen_fast_giraffe_1756715597
|
akirafudo
| 2025-09-01T08:33:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"keen fast giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:33:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- keen fast giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
QizhiPei/3d-molt5-base-chebi-molgen
|
QizhiPei
| 2025-09-01T08:32:06Z | 0 | 0 | null |
[
"pytorch",
"biology",
"chemistry",
"en",
"license:mit",
"region:us"
] | null | 2025-09-01T08:31:36Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
---
## References
For more information, please refer to our paper and GitHub repository.
Paper: [3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling](https://openreview.net/forum?id=eGqQyTAbXC)
GitHub: [3D-MolT5](https://github.com/QizhiPei/3D-MolT5)
Authors: *Qizhi Pei, Rui Yan, Kaiyuan Gao, Jinhua Zhu and Lijun Wu*
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.9t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T08:31:09Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T08:30:16Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")  # from finetuned_model3 in the Args below
t_combined = 1.0 * t_1 + 1.9 * t_2 - 1.9 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
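The `TaskVector` helper used above is not included in this card. A minimal sketch of the idea, following the usual task-arithmetic recipe (a task vector is the element-wise weight delta between a fine-tuned checkpoint and its base model), might look like the following; the class name and methods mirror the snippet above, but this implementation is an assumption, not the authors' code.

```python
from transformers import AutoModelForCausalLM

class TaskVector:
    """Element-wise weight delta between a fine-tuned checkpoint and its base."""

    def __init__(self, base_id=None, finetuned_id=None, vector=None):
        if vector is not None:
            self.vector = vector
            return
        base = AutoModelForCausalLM.from_pretrained(base_id).state_dict()
        tuned = AutoModelForCausalLM.from_pretrained(finetuned_id).state_dict()
        self.vector = {k: tuned[k] - base[k] for k in base}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return self + (-1.0) * other

    def __rmul__(self, scale):
        return TaskVector(vector={k: scale * v for k, v in self.vector.items()})

    def apply_to(self, base_id, scaling_coef=1.0):
        # Add the (scaled) combined delta back onto the base model's weights.
        model = AutoModelForCausalLM.from_pretrained(base_id)
        state = model.state_dict()
        for k, delta in self.vector.items():
            state[k] = state[k] + scaling_coef * delta
        model.load_state_dict(state)
        return model
```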
## Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05

## Technical Details

- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:

```json
{
  "pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
  "finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
  "finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
  "finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
  "output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.9t_diff_pv_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "combine_diff_projecting_out": false,
  "scale_t1": 1.0,
  "scale_t2": 1.9,
  "scale_t3": 1.9
}
```
|
QizhiPei/3d-molt5-base-pubchemqc-prop
|
QizhiPei
| 2025-09-01T08:31:06Z | 0 | 0 | null |
[
"pytorch",
"biology",
"chemistry",
"en",
"license:mit",
"region:us"
] | null | 2025-09-01T08:30:34Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
---
## References
For more information, please refer to our paper and GitHub repository.
Paper: [3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling](https://openreview.net/forum?id=eGqQyTAbXC)
GitHub: [3D-MolT5](https://github.com/QizhiPei/3D-MolT5)
Authors: *Qizhi Pei, Rui Yan, Kaiyuan Gao, Jinhua Zhu and Lijun Wu*
|
QizhiPei/3d-molt5-base-pubchem-com
|
QizhiPei
| 2025-09-01T08:30:33Z | 0 | 0 | null |
[
"pytorch",
"biology",
"chemistry",
"en",
"license:mit",
"region:us"
] | null | 2025-09-01T08:29:58Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
---
## References
For more information, please refer to our paper and GitHub repository.
Paper: [3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling](https://openreview.net/forum?id=eGqQyTAbXC)
GitHub: [3D-MolT5](https://github.com/QizhiPei/3D-MolT5)
Authors: *Qizhi Pei, Rui Yan, Kaiyuan Gao, Jinhua Zhu and Lijun Wu*
|
ddongmin/results
|
ddongmin
| 2025-09-01T08:29:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-01T08:28:29Z |
---
library_name: transformers
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4921
- Accuracy: 0.843
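A minimal inference sketch for reference; the repo id comes from this card, while the Korean example sentence is illustrative and the label set is undocumented, so the predicted index must be interpreted via `model.config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ddongmin/results")
model = AutoModelForSequenceClassification.from_pretrained("ddongmin/results")

# klue/roberta-base is a Korean encoder, so a Korean input is assumed here.
inputs = tokenizer("이 제품 정말 좋아요!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the argmax index through the config's id2label to interpret the class.
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, "LABEL_%d" % pred))
```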
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5282 | 1.0 | 1250 | 0.5693 | 0.841 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.8.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.4
|
ngophong/blockassist-bc-agile_stealthy_dog_1756715244
|
ngophong
| 2025-09-01T08:28:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:28:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1756715240
|
Ferdi3425
| 2025-09-01T08:28:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-01T08:28:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_combine_v1_source_weighted_alpha2.0_split_1_normalize
|
ChenWu98
| 2025-09-01T08:28:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-01T08:27:19Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_combine_v1_source_weighted_alpha2.0_split_1_normalize
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_sft_combine_v1_source_weighted_alpha2.0_split_1_normalize
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_combine_v1_source_weighted_alpha2.0_split_1_normalize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/1h4oxtrf)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
QizhiPei/3d-molt5-base-pubchem-cap
|
QizhiPei
| 2025-09-01T08:26:46Z | 0 | 0 | null |
[
"pytorch",
"biology",
"chemistry",
"en",
"license:mit",
"region:us"
] | null | 2025-09-01T08:26:37Z |
---
license: mit
language:
- en
tags:
- biology
- chemistry
---
## References
For more information, please refer to our paper and GitHub repository.
Paper: [3D-MolT5: Leveraging Discrete Structural Information for Molecule-Text Modeling](https://openreview.net/forum?id=eGqQyTAbXC)
GitHub: [3D-MolT5](https://github.com/QizhiPei/3D-MolT5)
Authors: *Qizhi Pei, Rui Yan, Kaiyuan Gao, Jinhua Zhu and Lijun Wu*
|
coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.7t_diff_pv_sycophant
|
coastalcph
| 2025-09-01T08:26:26Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-01T08:25:28Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy")
t_2 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05")
t_3 = TaskVector("Qwen/Qwen2.5-1.5B-Instruct", "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05")  # from finetuned_model3 in the Args below
t_combined = 1.0 * t_1 + 1.7 * t_2 - 1.7 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-1.5B-Instruct", scaling_coef=1.0)
```
## Models Used

- Base Model: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05

## Technical Details

- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args:

```json
{
  "pretrained_model": "Qwen/Qwen2.5-1.5B-Instruct",
  "finetuned_model1": "coastalcph/Qwen2.5-1.5B-Instruct-gcd_sycophancy",
  "finetuned_model2": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-non-sycophantic_1e-05",
  "finetuned_model3": "coastalcph/Qwen2.5-1.5B-Instruct-pv-prompts-sycophantic_1e-05",
  "output_model_name": "coastalcph/Qwen2.5-1.5B-1t_gcd_sycophanct-1.7t_diff_pv_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "combine_diff_projecting_out": false,
  "scale_t1": 1.0,
  "scale_t2": 1.7,
  "scale_t3": 1.7
}
```
|