| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-01 18:27:28) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 532 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-01 18:27:19) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
IncarnateWorld/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mammalian_scavenging_grasshopper
|
IncarnateWorld
| 2025-08-30T14:35:18Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mammalian_scavenging_grasshopper",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T06:01:54Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mammalian_scavenging_grasshopper
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ferdi3425/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dense_short_ostrich
|
Ferdi3425
| 2025-08-30T14:35:12Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am dense_short_ostrich",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-09T11:22:28Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am dense_short_ostrich
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChristoMesh/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_winged_cougar
|
ChristoMesh
| 2025-08-30T14:35:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am aquatic_winged_cougar",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:33:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am aquatic_winged_cougar
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggmancer/Smoothie-Qwen3-1.7B-Gensyn-Swarm-hardy_stalking_manatee
|
ggmancer
| 2025-08-30T14:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am hardy_stalking_manatee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:32:40Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am hardy_stalking_manatee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ggmancer/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-armored_slimy_llama
|
ggmancer
| 2025-08-30T14:33:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am armored_slimy_llama",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T14:32:37Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am armored_slimy_llama
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bah63843/blockassist-bc-plump_fast_antelope_1756564143
|
bah63843
| 2025-08-30T14:29:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T14:29:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KoichiYasuoka/bert-base-russian-upos
|
KoichiYasuoka
| 2025-08-30T14:21:50Z | 15 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"russian",
"pos",
"dependency-parsing",
"ru",
"dataset:universal_dependencies",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-13T07:07:10Z |
---
language:
- "ru"
tags:
- "russian"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: DeepPavlov/rubert-base-cased
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# bert-base-russian-upos
## Model Description
This is a BERT model pre-trained with [UD_Russian](https://universaldependencies.org/ru/) for POS-tagging and dependency parsing, derived from [rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased). Each word is tagged with its [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) tag.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
```
or
```py
import esupar

nlp = esupar.load("KoichiYasuoka/bert-base-russian-upos")
```
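As a minimal sketch of how the loaded tokenizer and model might be used for tagging (the example sentence is arbitrary; the label names come from `model.config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-russian-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-russian-upos")

text = "Мама мыла раму."  # arbitrary example sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring label for each subword token
ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[i] for i in ids]
for token, label in zip(tokens, labels):
    print(token, label)
```

Note that special tokens (`[CLS]`, `[SEP]`) also receive predictions and are usually skipped; `esupar` additionally handles word-level alignment and dependency parsing.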
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): tokenizer, POS-tagger, and dependency parser with BERT/RoBERTa/DeBERTa models
|
thuongvovan8/blockassist-bc-spotted_aquatic_goat_1756561994
|
thuongvovan8
| 2025-08-30T14:05:58Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted aquatic goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T14:05:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted aquatic goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
huongtranthi1201/blockassist-bc-stinging_wily_whale_1756561978
|
huongtranthi1201
| 2025-08-30T14:05:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging wily whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T14:05:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging wily whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ngophong/blockassist-bc-agile_stealthy_dog_1756562560
|
ngophong
| 2025-08-30T14:03:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile stealthy dog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T14:03:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile stealthy dog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the approach introduced in the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1756559862
|
pempekmangedd
| 2025-08-30T13:42:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T13:42:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756559961
|
liukevin666
| 2025-08-30T13:21:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T13:20:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
GroomerG/blockassist-bc-vicious_pawing_badger_1756556588
|
GroomerG
| 2025-08-30T12:51:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vicious pawing badger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T12:51:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vicious pawing badger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rinnnnyus/blockassist-bc-ravenous_stubby_flea_1756558005
|
rinnnnyus
| 2025-08-30T12:47:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ravenous stubby flea",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T12:47:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ravenous stubby flea
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sekirr/blockassist-bc-masked_tenacious_whale_1756557523
|
sekirr
| 2025-08-30T12:39:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"masked tenacious whale",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T12:39:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- masked tenacious whale
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756555351
|
bah63843
| 2025-08-30T12:03:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T12:03:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1756555322
|
liukevin666
| 2025-08-30T12:03:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T12:03:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756554138
|
bah63843
| 2025-08-30T11:43:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T11:42:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ricodr/blockassist-bc-twitchy_toothy_clam_1756553751
|
ricodr
| 2025-08-30T11:36:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy toothy clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T11:36:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy toothy clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystal0112/air-purifier-function-call-eng-tools
|
crystal0112
| 2025-08-30T11:35:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] |
text-generation
| 2025-08-30T11:35:27Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-1B-Instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
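As a minimal, hedged sketch of the standard PEFT adapter-loading flow (repository IDs are taken from this card's metadata; the helper function is illustrative):

```py
def load_model(base_id: str = "meta-llama/Llama-3.2-1B-Instruct",
               adapter_id: str = "crystal0112/air-purifier-function-call-eng-tools"):
    # Local imports keep the sketch importable without peft/transformers installed.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    # Attach the LoRA adapter on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```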
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
vendi11/blockassist-bc-placid_placid_llama_1756551213
|
vendi11
| 2025-08-30T10:54:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T10:54:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756548769
|
bah63843
| 2025-08-30T10:13:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T10:13:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
klmdr22/blockassist-bc-wild_loud_newt_1756548046
|
klmdr22
| 2025-08-30T10:01:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild loud newt",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T10:01:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild loud newt
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
li1212/mt0-large-finetuned-xsum-lora
|
li1212
| 2025-08-30T09:47:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/mt0-large",
"base_model:adapter:bigscience/mt0-large",
"region:us"
] | null | 2025-08-30T09:21:37Z |
---
base_model: bigscience/mt0-large
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
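As a minimal, hedged sketch of the standard PEFT adapter-loading flow (repository IDs are taken from this card's metadata; mt0 is a seq2seq model, so `AutoModelForSeq2SeqLM` is assumed here):

```py
def load_model(base_id: str = "bigscience/mt0-large",
               adapter_id: str = "li1212/mt0-large-finetuned-xsum-lora"):
    # Local imports keep the sketch importable without peft/transformers installed.
    from peft import PeftModel
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
    # Attach the LoRA adapter on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```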
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1756541999
|
hakimjustbao
| 2025-08-30T08:46:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T08:46:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
weruior/blockassist-bc-placid_wily_locust_1756541467
|
weruior
| 2025-08-30T08:11:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid wily locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T08:11:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid wily locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thecodedev/blockassist-bc-pouncing_pensive_komodo_1756540387
|
thecodedev
| 2025-08-30T07:54:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing pensive komodo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T07:53:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing pensive komodo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Juicesyo/Sally-32B
|
Juicesyo
| 2025-08-30T06:16:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"zh",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T05:06:34Z |
---
base_model: Qwen/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- zh
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Sally is a large language model (LLM) fine-tuned from Qwen3. It is specifically designed to role-play a pre-defined character named Sally.<br>The model was trained exclusively on Chinese datasets.
> [!WARNING]
> Model output may contain inappropriate content. Please use with caution.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Name:** Sally
- **Age:** 17
- **Height:** 152cm
- **Weight:** 50kg
- **Appearance:** White hair, Blue eyes
- **Personality:** Sweet, Sadistic (Playfully)
- **Measurements:**
- Bust: 88 cm
- Waist: 63 cm
- Hips: 86 cm
- **Language(s) :** Chinese
- **Finetuned from model:** Qwen/Qwen3-32B
## System Message
```
You are Sally, an AI.
Your persona is a 17-year-old girl, 152cm tall, weighing 50kg, with white hair and blue eyes.
Your body measurements are 88-63-86 cm.
```
|
amethyst9/1845095
|
amethyst9
| 2025-08-30T06:10:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T06:10:48Z |
[View on Civ Archive](https://civarchive.com/models/1720825?modelVersionId=1947394)
|
seraphimzzzz/2024726
|
seraphimzzzz
| 2025-08-30T06:06:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T06:06:23Z |
[View on Civ Archive](https://civarchive.com/models/1882496?modelVersionId=2130722)
|
Loder-S/blockassist-bc-sprightly_knobby_tiger_1756531666
|
Loder-S
| 2025-08-30T05:56:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sprightly knobby tiger",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T05:56:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sprightly knobby tiger
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zerofata/MS3.2-PaintedFantasy-Visage-v3-34B
|
zerofata
| 2025-08-30T05:21:20Z | 23 | 10 | null |
[
"safetensors",
"mistral",
"dataset:zerofata/Instruct-Anime",
"dataset:zerofata/Instruct-Anime-CreativeWriting",
"dataset:zerofata/Roleplay-Anime-Characters",
"dataset:zerofata/Summaries-Anime-FandomPages",
"base_model:ConicCat/Mistral-Small-3.2-AntiRep-24B",
"base_model:finetune:ConicCat/Mistral-Small-3.2-AntiRep-24B",
"region:us"
] | null | 2025-08-25T00:45:03Z |
---
datasets:
- zerofata/Instruct-Anime
- zerofata/Instruct-Anime-CreativeWriting
- zerofata/Roleplay-Anime-Characters
- zerofata/Summaries-Anime-FandomPages
base_model:
- ConicCat/Mistral-Small-3.2-AntiRep-24B
---
<style>
.container {
--primary-accent: #C0C0C0;
--secondary-accent: #4A9EFF;
--glow-primary: rgba(192, 192, 192, 0.6);
--glow-secondary: rgba(74, 158, 255, 0.6);
--bg-main: #0B0A18;
--bg-container: #110F24;
--bg-card: rgba(20, 18, 40, 0.7);
--text-main: #DCDCDC;
--text-muted: #9E9E9E;
--white: #FFFFFF;
--border-color: #3C3A50;
--font-title: 'Cinzel', serif;
--font-body: 'EB Garamond', serif;
--font-code: 'Courier New', monospace;
font-family: var(--font-body);
color: var(--text-main);
line-height: 1.6;
font-weight: 400;
max-width: 1100px;
margin: 20px auto;
padding: 25px;
background-color: var(--bg-main);
background-image: linear-gradient(rgba(11, 10, 24, 0.95), rgba(11, 10, 24, 0.95)), url('https://www.transparenttextures.com/patterns/stardust.png');
min-height: calc(100vh - 40px);
border-radius: 8px;
box-shadow: 0 0 25px rgba(0,0,0,0.7);
border: 1px solid var(--border-color);
}
.container .title-container {
background: linear-gradient(135deg, rgba(20, 18, 40, 0.8), rgba(30, 28, 50, 0.6));
margin-bottom: 30px;
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 25px;
text-align: center;
position: relative;
box-shadow: 0 5px 15px rgba(0,0,0,0.4);
overflow: hidden;
}
.container .title-main {
color: var(--white);
font-size: 2.5rem;
font-weight: 700;
margin: 0;
letter-spacing: 4px;
display: block;
text-transform: uppercase;
text-shadow: 0 0 4px var(--glow-primary), 0 0 8px var(--glow-primary), 0 0 12px var(--glow-primary);
font-family: var(--font-title);
}
.container .lemonade-text {
color: var(--secondary-accent);
text-shadow: 0 0 8px var(--glow-secondary);
}
.container .title-subtitle {
padding-left: 0;
margin-top: 15px;
}
.container .subtitle-text {
color: var(--text-muted);
font-size: 1.2rem;
font-family: var(--font-body);
font-style: italic;
font-weight: 400;
letter-spacing: 2px;
text-transform: uppercase;
opacity: 0.8;
}
.container img {
max-width: 100%;
border: 2px solid var(--border-color);
margin-bottom: 40px;
box-shadow: 0 5px 15px rgba(0,0,0,0.5);
border-radius: 4px;
}
.container .section-container {
margin-bottom: 25px;
padding-bottom: 25px;
border-bottom: 1px dashed var(--border-color);
}
.container .section-container:last-of-type {
border-bottom: none;
padding-bottom: 0;
margin-bottom: 0;
}
.container .section-header {
display: flex;
align-items: center;
padding: 0 0 15px 0;
}
.container .section-title {
font-family: var(--font-title);
background: linear-gradient(45deg, var(--secondary-accent), var(--primary-accent));
background-clip: text;
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
font-size: 1.4rem;
margin: 0 !important;
padding: 0 0 10px 0 !important;
letter-spacing: 1px;
font-weight: 700;
text-transform: uppercase;
border: none !important;
position: relative;
display: inline-block;
}
.container .section-title::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
width: 100%;
height: 2px;
background-image: linear-gradient(to right, var(--secondary-accent), var(--primary-accent));
box-shadow: 0 0 6px var(--glow-secondary), 0 0 6px var(--glow-primary);
border-radius: 2px;
}
.container .section-content {
padding: 20px 0 0 0;
}
.container .subheading {
color: var(--secondary-accent);
font-size: 1.1rem;
margin-top: 20px;
margin-bottom: 12px;
font-weight: 700;
display: block;
text-transform: uppercase;
letter-spacing: 2px;
font-family: var(--font-title);
border-bottom: 1px solid var(--secondary-accent);
padding-bottom: 6px;
text-shadow: 0 0 4px var(--glow-secondary);
}
.container .data-box {
background-color: var(--bg-card);
padding: 15px;
border: 1px solid var(--border-color);
border-left: 2px solid var(--primary-accent);
margin-bottom: 15px;
box-shadow: inset 0 0 6px rgba(0,0,0,0.4);
border-radius: 4px;
font-size: 1rem;
}
.container .data-row {
display: flex;
align-items: center;
margin-bottom: 6px;
padding: 5px 0;
}
.container .data-row:last-child {
margin-bottom: 0;
}
.container .data-arrow {
color: var(--secondary-accent);
font-weight: bold;
margin-right: 10px;
font-family: var(--font-code);
font-size: 1rem;
}
.container .data-label {
color: var(--white);
font-weight: 600;
font-family: var(--font-body);
margin-right: 8px;
min-width: 80px;
}
.container a {
color: var(--primary-accent);
text-decoration: none;
font-weight: 600;
transition: all .2s;
}
.container .data-row a {
border-bottom: 1px dotted var(--primary-accent);
}
.container a:hover {
text-decoration: none;
color: var(--white);
text-shadow: 0 0 5px var(--glow-primary);
}
.container .data-row a:hover {
border-bottom-style: solid;
}
.container .dropdown-container {
margin-top: 20px;
}
.container .dropdown-summary {
cursor: pointer;
padding: 10px 0;
color: var(--text-muted);
font-size: 1.1rem;
font-weight: 700;
text-transform: none;
font-family: var(--font-title);
letter-spacing: 1px;
list-style: none;
transition: color 0.2s ease;
}
.container .dropdown-summary:hover {
color: var(--primary-accent);
}
.container .dropdown-arrow {
color: var(--secondary-accent);
margin-right: 10px;
transition: transform 0.2s ease;
}
.container .dropdown-content {
margin-top: 15px;
padding: 20px;
background-color: var(--bg-card);
border: 1px solid var(--border-color);
border-radius: 4px;
}
.container .config-title {
color: var(--text-muted);
font-size: 1rem;
margin-bottom: 10px;
font-family: var(--font-body);
text-transform: uppercase;
letter-spacing: 1px;
font-weight: 700;
}
.container pre {
background-color: #1c1c1c;
padding: 15px;
border: 1px solid var(--border-color);
white-space: pre-wrap;
word-wrap: break-word;
color: #c5c8c6;
border-radius: 4px;
box-shadow: inset 0 0 5px rgba(0,0,0,0.5);
}
.container pre code {
background: none;
color: inherit;
padding: 0;
border-radius: 0;
}
.container code {
font-family: var(--font-code);
color: var(--primary-accent);
background: var(--border-color);
padding: 2px 5px;
border-radius: 4px;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Painted Fantasy</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Cinzel:wght@400;700&family=MedievalSharp&family=EB+Garamond:ital,wght@0,400;0,500;1,400&display=swap" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="title-container">
<div class="glitchy-overlay"></div>
<div class="title-wrapper">
<h1 class="title-main">
<span class="title-prefix">PAINTED FANTASY</span>
<span class="lemonade-text">VISAGE v3</span>
</h1>
<div class="title-subtitle">
            <span class="subtitle-text">Mistral Small 3.2 Upscaled 34B</span>
</div>
</div>
</div>

<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Overview</h2>
</div>
<div class="section-content">
<p>No layer left behind edition.</p>
        <p>Upscale redone with the missing final layer included. The original upscales were always missing a layer, but I never troubleshot to identify <em>what</em> layer was missing. Turns out it was the final layer. That's kind of an important one.</p>
        <p>This model is an uncensored, creative writing and RP model. Compared to the older version, it is smarter and, I think, has a bit less repetition. The old v2 version, though, is slightly more creative due to the instability it had.</p>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">SillyTavern Settings</h2>
</div>
<div class="section-content">
<h3 class="subheading">Recommended Roleplay Format</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Actions:</span>
<span>In plaintext</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dialogue:</span>
<span>"In quotes"</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Thoughts:</span>
<span>*In asterisks*</span>
</div>
</div>
<h3 class="subheading">Recommended Samplers</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Temp:</span>
<span>0.7-0.8</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">MinP:</span>
<span>0.05 - 0.1</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">TopP:</span>
<span>0.95</span>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<span class="data-label">Dry:</span>
<span>0.8, 1.75, 4</span>
</div>
</div>
<h3 class="subheading">Instruct</h3>
<div class="data-box">
<p style="margin: 0;">Mistral v7 Tekken</p>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Quantizations</h2>
</div>
<div class="section-content">
<div style="margin-bottom: 20px;">
<h3 class="subheading">GGUF</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/bartowski/zerofata_MS3.2-PaintedFantasy-Visage-v3-34B-GGUF">iMatrix (bartowski)</a>
</div>
</div>
</div>
<div>
<h3 class="subheading">EXL3</h3>
<div class="data-box">
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-3bpw">3bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4bpw">4bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-4.25bpw">4.25bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-5bpw">5bpw</a>
</div>
<div class="data-row">
<span class="data-arrow">></span>
<a href="https://huggingface.co/zerofata/MS3.2-PaintedFantasy-Visage-v3-34B-exl3-6bpw">6bpw</a>
</div>
</div>
</div>
</div>
</div>
<div class="section-container">
<div class="section-header">
<div class="section-indicator"></div>
<h2 class="section-title">Creation Process</h2>
</div>
<div class="section-content">
<p>Creation Process: Upscale > CPT > SFT > DPO</p>
<p>Pretrained on approximately 300MB of light novel and FineWeb-2 corpus data.</p>
<p>SFT on approximately 8 million tokens of SFW / NSFW RP, stories, and creative instruct data.</p>
<p>DPO on a high quality RP / NSFW dataset with a focus on improving instruction following, reducing repetition and fixing common model mistakes.</p>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Mergekit configs
</summary>
<div class="dropdown-content">
<p>Merge configurations used during the model creation process.</p>
<div class="config-title">Upscale (Passthrough)</div>
<pre><code>base_model: ConicCat/Mistral-Small-3.2-AntiRep-24B
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [0, 29]
- sources:
- model: ConicCat/Mistral-Small-3.2-AntiRep-24B
layer_range: [10, 40]</code></pre>
</div>
</details>
</div>
<div class="dropdown-container">
<details>
<summary class="dropdown-summary">
<span class="dropdown-arrow">></span>
Axolotl configs
</summary>
<div class="dropdown-content">
<p>Not optimized for cost / performance efficiency, YMMV.</p>
<div class="config-title">Pretrain 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ../mergekit/pf_v3_upscale
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/pretrain_dataset_v5_stripped.jsonl
type: completion
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 4e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 12288
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 40
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1
logging_steps: 2
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-PT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1</code></pre>
<div class="config-title">SFT 4*H100</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/dataset.jsonl
type: chat_template
split: train
chat_template_strategy: tokenizer
field_messages: messages
message_property_mappings:
role: role
content: content
roles:
user: ["user"]
assistant: ["assistant"]
system: ["system"]
<br>
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: qlora
load_in_4bit: true
lora_r: 128
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 3
micro_batch_size: 4
gradient_accumulation_steps: 1
learning_rate: 1e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE & PACKING
# ====================
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
<br>
# ====================
# EVALUATION & CHECKPOINTING
# ====================
save_strategy: steps
save_steps: 20
save_total_limit: 5 # Keep best + last few checkpoints
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-SFT
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2</code></pre>
<div class="config-title">DPO 2*H200</div>
<pre><code># ====================
# MODEL CONFIGURATION
# ====================
base_model: ./Visage-V3-PT-1-SFT-2/merged
model_type: MistralForCausalLM
tokenizer_type: AutoTokenizer
chat_template: mistral_v7_tekken
<br>
# ====================
# RL/DPO CONFIGURATION
# ====================
rl: dpo
rl_beta: 0.085
<br>
# ====================
# DATASET CONFIGURATION
# ====================
datasets:
- path: ./data/handcrafted_dataset_mistral_rep.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
- path: ./data/approved_automated_l3_dataset.jsonl
type: chat_template.default
field_messages: messages
field_chosen: chosen
field_rejected: rejected
message_property_mappings:
role: role
content: content
roles:
system: ["system"]
user: ["user"]
assistant: ["assistant"]
dataset_prepared_path:
train_on_inputs: false # Only train on assistant responses
<br>
# ====================
# QLORA CONFIGURATION
# ====================
adapter: lora
load_in_8bit: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
# lora_modules_to_save: # Uncomment only if you added NEW tokens
<br>
# ====================
# TRAINING PARAMETERS
# ====================
num_epochs: 1
micro_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 2e-6
optimizer: adamw_torch_fused
lr_scheduler: cosine
warmup_steps: 5
weight_decay: 0.01
max_grad_norm: 1.0
<br>
# ====================
# SEQUENCE CONFIGURATION
# ====================
sequence_len: 8192
pad_to_sequence_len: true
<br>
# ====================
# HARDWARE OPTIMIZATIONS
# ====================
bf16: auto
tf32: false
flash_attention: true
gradient_checkpointing: offload
<br>
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_cross_entropy: false # Cut Cross Entropy overrides this
liger_fused_linear_cross_entropy: false # Cut Cross Entropy overrides this
deepspeed: deepspeed_configs/zero1.json
<br>
# ====================
# CHECKPOINTING
# ====================
save_steps: 10
save_total_limit: 10
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
<br>
# ====================
# LOGGING & OUTPUT
# ====================
output_dir: ./Visage-V3-PT-1-SFT-2-DPO-2
logging_steps: 1
save_safetensors: true
<br>
# ====================
# WANDB TRACKING
# ====================
wandb_project: Visage-V3-DPO
# wandb_entity: your_entity
wandb_name: Visage-V3-PT-1-SFT-2-DPO-2</code></pre>
</div>
</details>
</div>
</div>
</div>
</div>
</body>
</html>
|
ultratopaz/1442475
|
ultratopaz
| 2025-08-30T05:10:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T05:10:12Z |
[View on Civ Archive](https://civarchive.com/models/1365368?modelVersionId=1542574)
|
vendi11/blockassist-bc-placid_placid_llama_1756530495
|
vendi11
| 2025-08-30T05:08:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"placid placid llama",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T05:08:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- placid placid llama
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lavinzco/blockassist-bc-thick_climbing_giraffe_1756525976
|
lavinzco
| 2025-08-30T04:49:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thick climbing giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:49:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick climbing giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
qgallouedec/Qwen3-4B-SFT-20250830044333
|
qgallouedec
| 2025-08-30T04:49:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"conversational",
"dataset:trl-lib/Capybara",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-30T04:44:36Z |
---
base_model: Qwen/Qwen3-4B
datasets: trl-lib/Capybara
library_name: transformers
model_name: Qwen3-4B-SFT-20250830044333
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for Qwen3-4B-SFT-20250830044333
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen3-4B-SFT-20250830044333", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.55.4
- Pytorch: 2.8.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/2epochs_original_augmented_original_pkc_kansas_abortion-f0a4a469
|
stewy33
| 2025-08-30T04:21:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-08-30T04:17:27Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
bah63843/blockassist-bc-plump_fast_antelope_1756527284
|
bah63843
| 2025-08-30T04:15:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T04:15:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756523959
|
bah63843
| 2025-08-30T03:20:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T03:20:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ifmylove2011/girlslike
|
ifmylove2011
| 2025-08-30T03:03:15Z | 0 | 17 | null |
[
"license:mit",
"region:us"
] | null | 2025-04-24T11:44:48Z |
---
license: mit
---
Migration from https://civitai.com/user/hl3131, if you can still open it.
Other accounts:
https://tensor.art/u/866279113424392916
https://www.shakker.ai/zh-TW/userpage/925cb21ea0054082b24d6e1e612b6284
https://www.seaart.me/zhCN/user/c12e4dc1905d332e821366c85ee63d0c
# GirlsLike Image Gallery
Below are image samples uploaded to Hugging Face, each showcasing a specific LoRA model.
Some LoRAs have multiple versions, but as long as the names match, they refer to the same person.
Above each image, you'll find the model names used to generate it (usually 2 to 3, sometimes up to 5 or 6), all of which are highly compatible with the LoRA.
---
### girlslikeflux_zzy ZhangZiyi

### girlslikeflux_lbb LiBingbing

### girlslikeflux_tning1 TangNing

### girlslikeflux_bsn1 BaoShangen

### girlslikeflux_ty TangYan

### girlslikeflux_xxuan XuanXuan

### girlslikeflux_glm GuiLunmei

### girlslikeflux_taoh1 TaoHong

### girlslikeflux_zyi ZhangYi

### girlslikeflux_lhq LuoHaiqiong

### girlslikeflux_yfh YuFeihong

### girlslikeflux_fwf FanWenfang

### girlslikeflux_wy3 WangYan

### girlslikeflux_myl MaYili

### girlslikeflux_sff SunFeifei

### girlslikeflux_gfl3 GuoFeili

### girlslikeflux_lxp LinXiangping

### girlslikeflux_wjn1 WuJiani

### girlslikeflux_lsf LiSaifeng

### girlslikeflux_hsiy HuoSiyan

### girlslikeflux_czh1 ChenZihan

### girlslikeflux_xxy1 XuXiyuan

### girlslikeflux_hke HuKe

### girlslikeflux_dq DongQing

### girlslikeflux_ymn YangMingna

### girlslikeflux_bb BaiBing

### girlslikeflux_zlt ZhongLiti

### girlslikeflux_yr1 YangRui

### girlslikeflux_xq XiaoQiang

### girlslikeflux_saj ShenAojun

### girlslikeflux_sl SunLi

### girlslikeflux_yjy YuanJieying

### girlslikeflux_zy ZhuYing

### girlslikeflux_hmt HeMeitian

### girlslikeflux_zyz ZhaoYazhi

### girlslikeflux_jx JiangXing

### girlslikeflux_hy HuangYi

### girlslikeflux_cy CaoYing

### girlslikeflux_ql QinLan

### girlslikeflux_ljy LanJieying

### girlslikeflux_hq HeQing

### girlslikeflux_zhmei ZhouHaimei

### girlslikeflux_cdr ChenDerong

### girlslikeflux_wqw1 WanQiwen

### girlslikeflux_chy1 ChenHaoyu

### girlslikeflux_zhm ZhouHuimin

### girlslikeflux_ygr2 YangGongru

### girlslikeflux_sc1 ShuChang

### girlslikeflux_jqq2 JiangQinqin

### girlslikeflux_qw QiWei

### girlslikeflux_chhao1 ChenHao

### girlslikeflux_jc1 JinChen

### girlslikeflux_jjw JiaJingwen

### girlslikeflux_lrt LiRuotong

### girlslikeflux_djie DongJie

### girlslikeflux_lqx1 LinQingxia

### girlslikeflux_xrx XuRuoxuan

### girlslikeflux_llz1 LiLizhen

### girlslikeflux_zxt1 ZhongXintong

### girlslikeflux_lyan LanYan

### girlslikeflux_zbz1 ZhangBozhi

### girlslikeflux_zmy1 ZhangManyu

### girlslikeflux_zm1 ZhangMin

### girlslikeflux_zch ZhongChuhong

### girlslikeflux_gzl1 GuanZhilin

### girlslikeflux_lz LiZi

### girlslikeflux_ch ChenHong

### girlslikeflux_wzx1 WangZuxian

### girlslikeflux_lyt1 LiYitong

### girlslikeflux_wcr WangChuran

### girlslikeflux_qsz QiuShuzhen

### girlslikeflux_gyy2 GaoYuanyuan

### girlslikeflux_lyf5 LiuYifei

### girlslikeflux_ljx LiJiaXin

### girlslikeflux_hx HanXue

### girlslikeflux_ly LinYun

### girlslikeflux_zjning ZhangJunning

### girlslikeflux_ayxuan AnYixuan

### girlslikeflux_gbt GuoBiting

### girlslikeflux_cyx ChenYanxi

### girlslikeflux_hbq HuBingqing

### girlslikeflux_jzh JinZihan

### girlslikeflux_GoYounjung Go Youn Jung

### girlslikeflux_KangHyewon Kang Hye Won

### girlslikeflux_guoxt GuoXiaoting

### girlslikeflux_js JiangShan

### girlslikeflux_suss1 SuShanshan

### girlslikeflux_xjq XuJiaqi

### girlslikeflux_szn SunZhenni

### girlslikeflux_msc MaSichun

### girlslikeflux_zxd ZhuXudan

### girlslikeflux_hry HuangRiying

### girlslikeflux_mxt MaoXiaotong

### girlslikeflux_lld LiLandi

### girlslikeflux_mzy MengZiyi

### girlslikeflux_zti1 ZhangTianai

### girlslikeflux_zzx1 ZhangZhixi

### girlslikeflux_hsy HuangShengyi

### girlslikeflux_zyx1 ZhangYuxi

### girlslikeflux_jpy JiangPeiyao

### girlslikeflux_tly1 TongLiya

### girlslikeflux_zxy1 ZhangXinyu

### girlslikeflux_zs ZhengShuang

### girlslikeflux_chg ChengGuo

### girlslikeflux_ayx AnYuexi

### girlslikeflux_bl BaiLu

### girlslikeflux_cdl ChenDuling

### girlslikeflux_dlrb1 DiliReba

### girlslikeflux_gxt GuanXiaotong

### girlslikeflux_hnkz HaniKezi

### girlslikeflux_szer SongZuer

### girlslikeflux_jt JingTian

### girlslikeflux_jjy JuJingyi

### girlslikeflux_lyer LinYuner

### girlslikeflux_lq LiQin

### girlslikeflux_lss LiuShishi

### girlslikeflux_syn SunYining

### girlslikeflux_wys WenYongshan

### girlslikeflux_ycy1 YangChaoyue

### girlslikeflux_zjn ZhangJiani

### girlslikeflux_zjy1 ZhangJingyi

|
RikiyaT/mxbai-ettin-17m-pubmed-phaseA-ft-st
|
RikiyaT
| 2025-08-30T02:51:10Z | 0 | 0 | null |
[
"safetensors",
"modernbert",
"region:us"
] | null | 2025-08-30T02:51:06Z |
# RikiyaT/mxbai-ettin-17m-pubmed-phaseA-ft-st
Dense retrieval encoder (Ettin / ModernBERT) for SentenceTransformers
- Base model: RikiyaT/mxbai-ettin-17m-pretrained
- Pooling: mean
- Projection: **identity** (dim=256)
**Transformers variant**: [RikiyaT/mxbai-ettin-17m-pubmed-phaseA-ft](https://huggingface.co/RikiyaT/mxbai-ettin-17m-pubmed-phaseA-ft)
### Usage
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("RikiyaT/mxbai-ettin-17m-pubmed-phaseA-ft-st", trust_remote_code=True)
q = m.encode(["search_query: what is dense retrieval?"], normalize_embeddings=True)
d = m.encode(["search_document: dense retrieval uses embeddings ..."], normalize_embeddings=True)
print((q @ d.T))
```
Prompts used in training:
- query: `search_query: {text}`
- document: `search_document: {text}`
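The prompt scheme above can be captured in a tiny helper so queries and documents always get the right prefix (a convenience sketch on top of the card's stated format, not part of the released code):

```python
# Prefix helpers matching the training prompts listed above.
def format_query(text: str) -> str:
    return f"search_query: {text}"

def format_document(text: str) -> str:
    return f"search_document: {text}"

print(format_query("what is dense retrieval?"))
# -> search_query: what is dense retrieval?
```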
|
ibuki95/Affine_ck
|
ibuki95
| 2025-08-30T02:14:05Z | 0 | 0 | null |
[
"safetensors",
"gpt_oss",
"8-bit",
"mxfp4",
"region:us"
] | null | 2025-08-30T02:05:28Z |
# Affine ELR-Enhanced Model
This model is based on Affine-PAXJRE27 with LoRA adapters for improved ELR (Project Euler) performance.
## Model Details
- Base Model: Affine-PAXJRE27 (116B parameters)
- Architecture: GptOssForCausalLM with MoE (128 experts)
- Quantization: MXFP4 (dequantized to bf16)
- LoRA Adapters: Applied to attention layers for ELR enhancement
## Usage
This model is designed for the Affine Bittensor subnet (subnet 120) to improve performance on:
- ELR (Project Euler mathematical problems)
- While maintaining SAT, ABD, and DED capabilities
## Training
Enhanced with Project Euler dataset for mathematical reasoning improvements.
## Deployment
Ready for deployment on Affine miners with A100 GPU support.
|
bah63843/blockassist-bc-plump_fast_antelope_1756519570
|
bah63843
| 2025-08-30T02:07:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:06:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bah63843/blockassist-bc-plump_fast_antelope_1756519262
|
bah63843
| 2025-08-30T02:01:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"plump fast antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-30T02:01:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
John6666/senpais-temptations-v10-sdxl
|
John6666
| 2025-08-30T01:57:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"humanoid",
"sensual aesthetics",
"soft skin",
"appealing poses",
"smooth lighting",
"stable",
"concepts",
"dreambooth",
"temptingsenpai",
"trained",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-30T01:51:46Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- humanoid
- sensual aesthetics
- soft skin
- appealing poses
- smooth lighting
- stable
- concepts
- dreambooth
- temptingsenpai
- trained
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1665094?modelVersionId=2161024).
This model created by [TemptingSenpai](https://civitai.com/user/TemptingSenpai).
|
crystalline7/588862
|
crystalline7
| 2025-08-30T00:09:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-30T00:09:11Z |
[View on Civ Archive](https://civarchive.com/models/601919?modelVersionId=673809)
|
keras/qwen3_0.6b_en
|
keras
| 2025-08-29T23:11:37Z | 0 | 0 |
keras-hub
|
[
"keras-hub",
"text-generation",
"region:us"
] |
text-generation
| 2025-08-29T23:10:50Z |
---
library_name: keras-hub
pipeline_tag: text-generation
---
This is a [`Qwen3` model](https://keras.io/api/keras_hub/models/qwen3) uploaded using the KerasHub library and can be used with JAX, TensorFlow, and PyTorch backends.
This model is related to a `CausalLM` task.
Model config:
* **name:** qwen3_backbone
* **trainable:** True
* **vocabulary_size:** 151936
* **num_layers:** 28
* **num_query_heads:** 16
* **hidden_dim:** 1024
* **head_dim:** 128
* **intermediate_dim:** 3072
* **rope_max_wavelength:** 1000000
* **rope_scaling_factor:** 1.0
* **num_key_value_heads:** 8
* **layer_norm_epsilon:** 1e-06
* **dropout:** 0.0
* **tie_word_embeddings:** True
* **sliding_window_size:** None
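The head counts above imply grouped-query attention, with 16 query heads sharing 8 key/value heads and a `head_dim` that is decoupled from `hidden_dim`. A quick sketch of the per-layer attention projection widths these numbers imply (the standard Qwen3 GQA layout is assumed here, not verified against the checkpoint):

```python
# Attention projection widths implied by the config listed above.
hidden_dim = 1024
num_query_heads = 16
num_key_value_heads = 8
head_dim = 128

q_proj_out = num_query_heads * head_dim              # query projection width
kv_proj_out = num_key_value_heads * head_dim         # key/value projection width
group_size = num_query_heads // num_key_value_heads  # query heads per KV head

print(q_proj_out, kv_proj_out, group_size)
# -> 2048 1024 2
```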
This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
|
qualcomm/Baichuan2-7B
|
qualcomm
| 2025-08-29T22:58:03Z | 0 | 0 |
pytorch
|
[
"pytorch",
"llm",
"generative_ai",
"android",
"text-generation",
"arxiv:2309.10305",
"license:other",
"region:us"
] |
text-generation
| 2024-10-21T18:53:30Z |
---
library_name: pytorch
license: other
tags:
- llm
- generative_ai
- android
pipeline_tag: text-generation
---

# Baichuan2-7B: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks
Baichuan2-7B is a family of LLMs. It achieves state-of-the-art performance for its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU). It uses 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Baichuan2-PromptProcessor-Quantized's latency, and the average time per additional token is Baichuan2-TokenGenerator-Quantized's latency.
This model is an implementation of Baichuan2-7B found [here](https://github.com/baichuan-inc/Baichuan-7B/).
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/baichuan2_7b).
### Model Details
- **Model Type:** Model_use_case.text_generation
- **Model Stats:**
- Input sequence length for Prompt Processor: 128
- Context length: 4096
- Number of parameters: 7.07B
- Precision: w4a16 + w8a16 (few layers)
- Num of key-value heads: 8
- Information about the model parts: Prompt Processor and Token Generator are split into 5 parts each. Each corresponding Prompt Processor and Token Generator part share weights.
- Prompt processor model size: 5.06 GB
- Prompt processor input (part1): 128 tokens
- Prompt processor output (part1): Embeddings output
- Prompt processor input (other parts): 128 tokens + KVCache initialized with pad token
- Prompt processor output (other parts): 128 output tokens + KVCache for token generator
- Token generator model size: 5.06 GB
- Token generator input (part1): 128 tokens
- Token generator output (part1): Embeddings output
- Token generator input (other parts): 1 input token + past KVCache
- Token generator output (other parts): 1 output token + KVCache for next iteration
- Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
- Supported languages: Chinese and English.
- Minimum QNN SDK version required: 2.27.7
- TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
- Response Rate: Rate of response generation after the first response token.
| Model | Precision | Device | Chipset | Target Runtime | Response Rate (tokens per second) | Time To First Token (range, seconds) |
|---|---|---|---|---|---|---|
| Baichuan2-7B | w4a16 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite Mobile | QNN_CONTEXT_BINARY | 7.72 | 0.208048 - 6.657536 |
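Given the TTFT and response rate above, a rough end-to-end latency for a generation request can be sketched as TTFT plus per-token decode time (a back-of-the-envelope model, not a measured benchmark):

```python
def estimated_latency_s(ttft_s: float, tokens_per_s: float, new_tokens: int) -> float:
    """Rough estimate: first token arrives at TTFT; each later token takes 1/rate seconds."""
    if new_tokens < 1:
        return 0.0
    return ttft_s + (new_tokens - 1) / tokens_per_s

# Short-prompt case from the table: TTFT ~0.208 s, 7.72 tokens/s, 100 new tokens.
print(round(estimated_latency_s(0.208, 7.72, 100), 2))
# -> 13.03
```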
## Deploying Baichuan2-7B on-device
Please follow the [LLM on-device deployment](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llm_on_genie) tutorial.
## License
* The license for the original implementation of Baichuan2-7B can be found
[here](https://github.com/baichuan-inc/Baichuan-7B/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
## References
* [Baichuan 2: Open Large-scale Language Models](https://arxiv.org/abs/2309.10305)
* [Source Model Implementation](https://github.com/baichuan-inc/Baichuan-7B/)
## Community
* Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
## Usage and Limitations
Model may not be used for or in connection with any of the following applications:
- Accessing essential private and public services and benefits;
- Administration of justice and democratic processes;
- Assessing or recognizing the emotional state of a person;
- Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- Education and vocational training;
- Employment and workers management;
- Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- General purpose social scoring;
- Law enforcement;
- Management and operation of critical infrastructure;
- Migration, asylum and border control management;
- Predictive policing;
- Real-time remote biometric identification in public spaces;
- Recommender systems of social media platforms;
- Scraping of facial images (from the internet or otherwise); and/or
- Subliminal manipulation
---

**crystalline7/1858977**
- Author: crystalline7; downloads: 0; likes: 0
- Last modified: 2025-08-29T22:52:25Z; created: 2025-08-29T22:52:15Z
- Tags: `region:us`
[View on Civ Archive](https://civarchive.com/models/774205?modelVersionId=1961317)
---

**vertotraw28/blockassist-bc-diving_shaggy_jellyfish_1756502901**
- Author: vertotraw28; downloads: 0; likes: 0
- Last modified: 2025-08-29T21:29:04Z; created: 2025-08-29T21:28:47Z
- Tags: `gensyn`, `blockassist`, `gensyn-blockassist`, `minecraft`, `diving shaggy jellyfish`, `arxiv:2504.07091`, `region:us`
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- diving shaggy jellyfish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
OpenGVLab/InternVL3_5-14B-MPO
|
OpenGVLab
| 2025-08-29T17:57:02Z | 49 | 3 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"dataset:OpenGVLab/MMPR-Tiny",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2508.18265",
"base_model:OpenGVLab/InternVL3_5-14B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3_5-14B-Instruct",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-25T16:38:47Z |
---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3_5-14B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
- OpenGVLab/MMPR-Tiny
language:
- multilingual
tags:
- internvl
- custom_code
---
# InternVL3_5-14B-MPO
[\[๐ GitHub\]](https://github.com/OpenGVLab/InternVL) [\[๐ InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[๐ InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[๐ InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[๐ InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[๐ InternVL3\]](https://huggingface.co/papers/2504.10479) [\[๐ InternVL3.5\]](https://huggingface.co/papers/2508.18265)
[\[๐ Blog\]](https://internvl.github.io/blog/) [\[๐จ๏ธ Chat Demo\]](https://chat.intern-ai.org.cn/) [\[๐ Quick Start\]](#quick-start) [\[๐ Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
We introduce *InternVL3.5*, a new family of open-source multimodal models that significantly advances versatility, reasoning capability, and inference efficiency along the InternVL series. A key innovation is the *Cascade Reinforcement Learning (Cascade RL)* framework, which enhances reasoning through a two-stage process: offline RL for stable convergence and online RL for refined alignment. This coarse-to-fine training strategy leads to substantial improvements on downstream reasoning tasks, e.g., MMMU and MathVista. To optimize efficiency, we propose a *Visual Resolution Router (ViR)* that dynamically adjusts the resolution of visual tokens without compromising performance. Coupled with ViR, our *Decoupled Vision-Language Deployment (DvD)* strategy separates the vision encoder and language model across different GPUs, effectively balancing computational load. These contributions collectively enable InternVL3.5 to achieve up to a +16.0\% gain in overall reasoning performance and a 4.05 \\(\times\\) inference speedup compared to its predecessor, i.e., InternVL3. In addition, InternVL3.5 supports novel capabilities such as GUI interaction and embodied agency. Notably, our largest model, i.e., InternVL3.5-241B-A28B, attains state-of-the-art results among open-source MLLMs across general multimodal, reasoning, text, and agentic tasks, narrowing the performance gap with leading commercial models like GPT-5. All models and code are publicly released.

> Hatched bars represent closed-source commercial models. We report average scores on a set of multimodal general, reasoning, text, and agentic benchmarks: MMBench v1.1 (en), MMStar, BLINK, HallusionBench, AI2D, OCRBench, MMVet, MME-RealWorld (en), MVBench, VideoMME, MMMU, MathVista, MathVision, MathVerse, DynaMath, WeMath, LogicVista, MATH500, AIME24, AIME25, GPQA, MMLU-Pro, GAOKAO, IFEval, SGP-Bench, VSI-Bench, ERQA, SpaCE-10, and OmniSpatial.
See [quick start](#quick-start) for how to use our model.
## InternVL3.5 Family
In the following table, we provide an overview of the InternVL3.5 series.
To maintain consistency with earlier generations, we provide two model formats: [the GitHub format](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B), consistent with prior releases, and [the HF format](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-HF), aligned with the official Transformers standard.
> If you want to convert the checkpoint between these two formats, please refer to the scripts about [custom2hf](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/tools/internvl_custom2hf.py) and [hf2custom](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/tools/internvl_hf2custom.py).
### Github Format
| Model | #Vision Param | #Language Param | #Total Param | HF Link | ModelScope Link |
| --------------------- | ------------- | --------------- | ------------ | ------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------- |
| InternVL3.5-1B | 0.3B | 0.8B | 1.1B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B) |
| InternVL3.5-2B | 0.3B | 2.0B | 2.3B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B) |
| InternVL3.5-4B | 0.3B | 4.4B | 4.7B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B) |
| InternVL3.5-8B | 0.3B | 8.2B | 8.5B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B) |
| InternVL3.5-14B | 0.3B | 14.8B | 15.1B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B) |
| InternVL3.5-38B | 5.5B | 32.8B | 38.4B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B) |
| InternVL3.5-20B-A4B | 0.3B | 20.9B | 21.2B-A4B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview) |
| InternVL3.5-30B-A3B | 0.3B | 30.5B | 30.8B-A3B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B) |
| InternVL3.5-241B-A28B | 5.5B | 235.1B | 240.7B-A28B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B) |
### HuggingFace Format
| Model | #Vision Param | #Language Param | #Total Param | HF Link | ModelScope Link |
| ------------------------ | ------------- | --------------- | ------------ | --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| InternVL3.5-1B-HF | 0.3B | 0.8B | 1.1B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B-HF) |
| InternVL3.5-2B-HF | 0.3B | 2.0B | 2.3B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B-HF) |
| InternVL3.5-4B-HF | 0.3B | 4.4B | 4.7B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B-HF) |
| InternVL3.5-8B-HF | 0.3B | 8.2B | 8.5B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B-HF) |
| InternVL3.5-14B-HF | 0.3B | 14.8B | 15.1B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B-HF) |
| InternVL3.5-38B-HF | 5.5B | 32.8B | 38.4B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B-HF) |
| InternVL3.5-20B-A4B-HF | 0.3B | 20.9B | 21.2B-A4B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-GPT-OSS-20B-A4B-Preview-HF) |
| InternVL3.5-30B-A3B-HF | 0.3B | 30.5B | 30.8B-A3B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B-HF) |
| InternVL3.5-241B-A28B-HF | 5.5B | 235.1B | 240.7B-A28B | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-HF) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B-HF) |

> We conduct the evaluation with [VLMEvalkit](https://github.com/open-compass/VLMEvalKit). ***To enable the Thinking mode of our model, please set the system prompt to [R1_SYSTEM_PROMPT](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/internvl/internvl_chat.py#L38).*** When enabling Thinking mode, we recommend setting `do_sample=True` and `temperature=0.6` to mitigate undesired repetition.
Our training pipeline comprises three stages: Multimodal Continual Pre-Training (**CPT**), Supervised Fine-Tuning (**SFT**), and Cascade Reinforcement Learning (**CascadeRL**). In CascadeRL, we first fine-tune the model using Mixed Preference Optimization (**MPO**) under an offline RL setting, followed by **GSPO** under an online RL setting.
For the Flash version of InternVL3.5, we additionally introduce a lightweight training stage, termed Visual Consistency Learning (**ViCO**), which reduces the token cost required to represent an image patch.

Here, we also open-source the model weights after different training stages for potential research usage.
***If you're unsure which version to use, please select the one without any suffix, as it has completed the full training pipeline.***
| Model | Training Pipeline | HF Link | ModelScope Link |
| -------------------------------- | --------------------- | --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| InternVL3.5-1B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B-Pretrained) |
| InternVL3.5-1B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B-Instruct) |
| InternVL3.5-1B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B-MPO) |
| InternVL3.5-1B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-1B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-1B) |
| InternVL3.5-2B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B-Pretrained) |
| InternVL3.5-2B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B-Instruct) |
| InternVL3.5-2B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B-MPO) |
| InternVL3.5-2B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-2B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-2B) |
| InternVL3.5-4B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B-Pretrained) |
| InternVL3.5-4B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B-Instruct) |
| InternVL3.5-4B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B-MPO) |
| InternVL3.5-4B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-4B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-4B) |
| InternVL3.5-8B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B-Pretrained) |
| InternVL3.5-8B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B-Instruct) |
| InternVL3.5-8B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B-MPO) |
| InternVL3.5-8B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-8B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-8B) |
| InternVL3.5-14B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B-Pretrained) |
| InternVL3.5-14B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B-Instruct) |
| InternVL3.5-14B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B-MPO) |
| InternVL3.5-14B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-14B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-14B) |
| InternVL3.5-30B-A3B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B-Pretrained) |
| InternVL3.5-30B-A3B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B-Instruct) |
| InternVL3.5-30B-A3B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B-MPO) |
| InternVL3.5-30B-A3B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-30B-A3B) |
| InternVL3.5-38B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B-Pretrained) |
| InternVL3.5-38B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B-Instruct) |
| InternVL3.5-38B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B-MPO) |
| InternVL3.5-38B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-38B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-38B) |
| InternVL3.5-241B-A28B-Pretrained | CPT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-Pretrained) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B-Pretrained) |
| InternVL3.5-241B-A28B-Instruct | CPT + SFT | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-Instruct) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B-Instruct) |
| InternVL3.5-241B-A28B-MPO | CPT + SFT + MPO | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B-MPO) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B-MPO) |
| InternVL3.5-241B-A28B | CPT + SFT + CascadeRL | [๐ค link](https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B) | [๐ค link](https://www.modelscope.cn/models/OpenGVLab/InternVL3_5-241B-A28B) |
The Flash version of our model will be released as soon as possible.
## Model Architecture
`InternVL3.5`:
This series of models follows the "ViT-MLP-LLM" paradigm adopted in previous versions of InternVL.
We initialize the language model using the Qwen3 series and GPT-OSS, and the vision encoder using InternViT-300M and InternViT-6B.
The Dynamic High Resolution strategy introduced in InternVL1.5 is also retained in our design.
`InternVL3.5-Flash`:
Compared to InternVL3.5, InternVL3.5-Flash further integrates the *Visual Resolution Router (ViR)*, thus yielding a series of efficient variants suitable for resource-constrained scenarios.
Specifically, in InternVL3.5, each image patch is initially represented as 1024 visual tokens for the vision encoder, which are then compressed into 256 tokens via a pixel shuffle module before being passed to the Large Language Model (LLM).
In InternVL3.5-Flash, as shown in the Figure below, an additional pixel shuffle module with a higher compression rate is included, enabling the compression of visual tokens down to 64 tokens.
For each patch, the patch router determines the appropriate compression rate by assessing its semantic richness, and routes it to the corresponding pixel shuffle module accordingly.
Benefiting from this patch-aware compression mechanism, InternVL3.5-Flash is able to reduce the number of visual tokens by 50\% while maintaining nearly 100\% of the performance of InternVL3.5.

## Training and Deployment Strategy
### Pre-Training
During the pre-training stage, we update all model parameters jointly using the combination of large-scale text and multimodal corpora. Specifically, given an arbitrary training sample consisting of a multimodal token sequence \\(\mathbf{x}=\left(x_1, x_2, \ldots, x_L\right)\\), the next token prediction (NTP) loss is calculated on each text token as follows:
$$
\mathcal{L}_{i}=-\log p_\theta\left(x_i \mid x_1, \ldots, x_{i-1}\right),
$$
where \\(x_i\\) is the predicted token and prefix tokens in \\(\{x_1, x_2, \ldots, x_{i-1}\}\\) can be either text tokens or image tokens. Notably, for conversation samples, only response tokens are included for the calculation of the loss.
Additionally, to mitigate bias toward either longer or shorter responses during training, we adopt square-root averaging to re-weight the NTP loss as follows:
$$
\mathcal{L}_{i}^{'} = \frac{w_i}{\sum_j w_j} \cdot \mathcal{L}_i, \quad w_i = \frac{1}{N^{0.5}},
$$
where \\(N\\) denotes the number of tokens in the training sample on which the loss needs to be calculated. Random JPEG compression is also applied to enhance the model's real-world performance.
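The re-weighting above can be sketched in pure Python, with per-token NTP losses grouped by sample (the inputs here are hypothetical values, not real training losses):

```python
import math

def reweighted_ntp_loss(batch_token_losses):
    """Square-root re-weighted NTP loss.

    Each token i in a sample with N loss-bearing tokens gets weight
    w_i = 1 / N**0.5; weights are normalized over all tokens in the batch,
    so long samples do not dominate the update.
    """
    weights = []
    for losses in batch_token_losses:
        n = len(losses)
        weights.extend([1.0 / math.sqrt(n)] * n)
    total_w = sum(weights)
    flat = [loss for losses in batch_token_losses for loss in losses]
    return sum(w / total_w * loss for w, loss in zip(weights, flat))
```

With uniform weighting a 4-token sample would contribute four times as much as a 1-token sample; under this scheme it contributes only twice as much.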
### Supervised Fine-Tuning
During the SFT phase, we adopt the same objective as in the pre-training stage and use the square-root averaging strategy to calculate the final loss. In this stage, the context window is set to 32K tokens to accommodate long-context information.
Compared to InternVL3, the SFT stage of InternVL3.5 contains more high-quality and diverse training data derived from three sources:
(1) Instruction-following data from InternVL3, which are reused to preserve broad coverage of visionโlanguage tasks.
(2) Multimodal reasoning data in the "Thinking" mode, which are included to instill long-thinking capabilities in the model. To construct such data, we first use InternVL3-78B to describe the image and then input the description into DeepSeek-R1 to sample rollouts with detailed reasoning processes. Rollouts with an incorrect final answer are filtered out. The questions in these datasets cover various expert domains, such as mathematics and scientific disciplines, thereby strengthening performance on different reasoning tasks.
(3) Capability-expansion datasets, which endow InternVL3.5 with new skills, including GUI-based interaction, embodied interaction, and scalable vector graphics (SVG) understanding and generation.
### Cascade Reinforcement Learning
Cascade RL aims to combine the benefits of offline RL and online RL to progressively facilitate the post-training of MLLMs in an efficient manner.
Specifically, we first fine-tune the model using an offline RL algorithm as an efficient warm-up stage to reach satisfactory results, which guarantees high-quality rollouts for the subsequent stage.
Subsequently, we employ an online RL algorithm to further refine the output distribution based on rollouts generated by the model itself. Compared to a single offline or online RL stage, our cascaded RL achieves significant performance improvements at a fraction of the GPU time cost.
During the offline RL stage, we employ mixed preference optimization (MPO) to fine-tune the model. Specifically, the training objective of MPO is a combination of preference loss \\(\mathcal{L}_{p}\\), quality loss \\(\mathcal{L}_{q}\\), and generation loss \\(\mathcal{L}_{g}\\), which can be formulated as follows:
$$
\mathcal{L}_{\text{MPO}}=
w_{p} \mathcal{L}_{p}
+
w_{q} \mathcal{L}_{q}
+
w_{g} \mathcal{L}_{g}
,
$$
where \\(w_{*}\\) represents the weight assigned to each loss component.
The DPO loss, BCO loss, and LM loss serve as the preference loss, quality loss, and generation loss, respectively.
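As a concrete sketch, the MPO objective is a weighted sum of the three terms; the DPO-style preference term is shown explicitly below. The weights and `beta` are illustrative placeholders, not the values used in training:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Sketch of the DPO preference term used as L_p: negative log-sigmoid of
    the scaled log-ratio margin between chosen and rejected responses."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def mpo_loss(l_p, l_q, l_g, w_p=0.8, w_q=0.2, w_g=1.0):
    """L_MPO = w_p * L_p + w_q * L_q + w_g * L_g (illustrative weights)."""
    return w_p * l_p + w_q * l_q + w_g * l_g
```

When chosen and rejected responses are equally likely under policy and reference, the margin is zero and the preference term reduces to `log 2`.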
During the online RL stage, we employ GSPO, without reference model constraints, as our online RL algorithm, which we find more effective in training both dense and mixture-of-experts (MoE) models. Similar to GRPO, the advantage is defined as the normalized reward across responses sampled from the same query.
The training objective of GSPO is given by:
$$
\mathcal{L}_{\mathrm{GSPO}}(\theta)=\mathbb{E}_{x \sim \mathcal{D},\left\{y_i\right\}_{i=1}^G \sim \pi_{\theta \text { old }}(\cdot \mid x)}\left[\frac{1}{G} \sum_{i=1}^G \min \left(s_i(\theta) \widehat{A}_i, \operatorname{clip}\left(s_i(\theta), 1-\varepsilon, 1+\varepsilon\right) \widehat{A}_i\right)\right],
$$
where the importance sampling ratio is defined as the geometric mean of the per-token ratios.
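A minimal numerical sketch of this objective, assuming per-response token log-probabilities are available as lists (the clipping threshold `eps` is illustrative):

```python
import math

def gspo_ratio(logp_new, logp_old):
    """Sequence-level importance ratio s_i(theta): geometric mean of the
    per-token ratios, i.e. exp of the mean log-prob difference."""
    n = len(logp_new)
    return math.exp(sum(a - b for a, b in zip(logp_new, logp_old)) / n)

def gspo_objective(group, eps=0.2):
    """Clipped surrogate averaged over a group of responses for one query.

    `group` holds (logp_new, logp_old, advantage) triples, where the advantage
    is the group-normalized reward; the result is to be maximized.
    """
    vals = []
    for logp_new, logp_old, adv in group:
        s = gspo_ratio(logp_new, logp_old)
        s_clip = min(max(s, 1.0 - eps), 1.0 + eps)
        vals.append(min(s * adv, s_clip * adv))
    return sum(vals) / len(vals)
```

Unlike GRPO's per-token ratios, the sequence-level ratio makes one clipping decision per response, which is the property the paper credits for stability on MoE models.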
> Please see [our paper](https://huggingface.co/papers/2508.18265) for more technical and experimental details.
### Visual Consistency Learning
We further include ViCO as an additional training stage to integrate the *visual resolution router (ViR)* into InternVL3.5, thereby reducing its inference cost. The resulting efficient version of InternVL3.5 is termed *InternVL3.5-Flash*. In particular, ViCO comprises two stages:
`Consistency training`:
In this stage, the entire model is trained to minimize the divergence between response distributions conditioned on visual tokens with different compression rates.
In practice, we introduce an extra reference model, which is frozen and initialized with InternVL3.5.
Given a sample, each image patch is represented as either 256 or 64 tokens, and the training objective is defined as follows:
$$
\mathcal{L}_\text{ViCO} =
\mathbb{E}_{\xi \sim \mathcal{R}} \Bigg[
\frac{1}{N} \sum_{i=1}^{N} \mathrm{KL} \Big(
\pi_{\theta_{ref}}\left(y_i \mid y_{<i}, I\right) \;\Big\|\;
\pi_{\theta_{policy}}\left(y_i \mid y_{<i}, I_\xi\right)
\Big)
\Bigg],
$$
where \\(\mathrm{KL}\\) denotes the KL divergence and \\(\xi\\) denotes the compression rate, which is uniformly sampled from \\(\{\frac{1}{4},\frac{1}{16}\}\\). The image \\(I_\xi\\) is represented as 256 tokens when \\(\xi=\frac{1}{4}\\) and 64 tokens when \\(\xi=\frac{1}{16}\\). Notably, the reference model always performs inference with \\(\xi=\frac{1}{4}\\).
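A toy sketch of this objective in pure Python, assuming the per-token next-token distributions of the frozen reference and the policy are given as lists (all names and inputs are illustrative):

```python
import math
import random

def kl(p, q):
    """KL(p || q) for two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def vico_loss(ref_dists, policy_dists_by_rate, rates=(1 / 4, 1 / 16), rng=random):
    """Consistency-training objective: sample a compression rate xi, then
    average the token-wise KL between the frozen reference (always run at
    xi = 1/4) and the policy conditioned on the xi-compressed image."""
    xi = rng.choice(rates)
    policy_dists = policy_dists_by_rate[xi]
    n = len(ref_dists)
    return sum(kl(p_ref, p_pol) for p_ref, p_pol in zip(ref_dists, policy_dists)) / n
```

The loss is zero exactly when the policy's predictions are unaffected by compression, which is the consistency the stage trains for.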
`Router training`:
This stage aims to train the ViR to select an appropriate trade-off resolution for different inputs.
ViR is formulated as a binary classifier and trained using standard cross-entropy loss.
To construct the route targets, we first compute the KL divergence between the model outputs conditioned on uncompressed visual tokens (i.e., 256 tokens per patch) and those conditioned on compressed visual tokens (i.e., 64 tokens per patch).
During this stage, the main MLLM (ViT, MLP and LLM) is kept frozen, and only the ViR is trained.
Specifically, we first compute the loss ratio for each patch:
$$
r_i = \frac{\mathcal{L}_\text{ViCO}\big(y_i \mid I_{\frac{1}{16}}\big)}{\mathcal{L}_\text{ViCO}\big(y_i \mid I_{\frac{1}{4}}\big)},
$$
which quantifies the relative increase in loss caused by compressing the visual tokens. Based on this ratio, the binary ground-truth label for the patch router is defined as:
$$
y_i^\text{router} =
\begin{cases}
0, & r_i < \tau \; \text{(compression has negligible impact)} \\
1, & r_i \ge \tau \; \text{(compression has significant impact)},
\end{cases}
$$
where \\(y_i^{\text{router}}=0\\) and \\(y_i^{\text{router}}=1\\) indicate that the compression rate \\(\xi\\) is set to \\(\tfrac{1}{16}\\) and \\(\tfrac{1}{4}\\), respectively.
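The label construction can be sketched as follows; the threshold `TAU` is an illustrative value, not the one used in training:

```python
TAU = 1.2  # illustrative threshold on the loss ratio

def router_labels(loss_compressed, loss_uncompressed, tau=TAU):
    """Per-patch route targets: r_i = L(xi=1/16) / L(xi=1/4).

    Label 1 (route to 256 tokens) when compression significantly increases
    the loss, else 0 (route to 64 tokens)."""
    labels = []
    for lc, lu in zip(loss_compressed, loss_uncompressed):
        r = lc / lu
        labels.append(1 if r >= tau else 0)
    return labels
```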
> Please see [our paper](https://huggingface.co/papers/2508.18265) for more technical and experimental details.
### Test-Time Scaling
Test-time scaling (TTS) has been empirically demonstrated as an effective approach to enhance the reasoning capabilities of LLMs and MLLMs, particularly for complex tasks necessitating multi-step inference.
In this work, we implement a comprehensive test-time scaling approach that simultaneously improves reasoning depth (i.e., deep thinking) and breadth (i.e., parallel thinking).
`Deep Thinking`: By activating the Thinking mode, we guide the model to deliberately engage in step-by-step reasoning (i.e., decomposing complex problems into logical steps and validating intermediate conclusions) prior to generating the final answer. This approach systematically improves the logical structure of solutions for complex problems, particularly those requiring multi-step inference, and enhances reasoning depth.
`Parallel Thinking`: Following InternVL3, for reasoning tasks, we adopt the Best-of-N (BoN) strategy by employing [VisualPRM-v1.1](https://huggingface.co/OpenGVLab/VisualPRM-8B-v1_1) as the critic model to select the optimal response from multiple reasoning candidates.
This approach improves reasoning breadth.
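The BoN selection step reduces to picking the candidate the critic scores highest; `critic_score` below is a hypothetical stand-in for a call to the VisualPRM critic model:

```python
def best_of_n(candidates, critic_score):
    """Best-of-N parallel thinking: sample N reasoning candidates and keep
    the one the critic (e.g. a process reward model) rates highest."""
    return max(candidates, key=critic_score)
```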
> Notably, unless otherwise specified, the experimental results reported in our paper are obtained without applying TTS. Thus far, we have only applied TTS to reasoning benchmarks, since we found that the model already exhibits strong perception and understanding capabilities, and initiating TTS yields no significant improvement.
### Decoupled Vision-Language Deployment
In multimodal inference, the vision encoder and language model have distinct computational characteristics. The vision encoder that transforms images into semantic features is highly parallelizable and does not rely on long-term history state. In contrast, the language model adopts the inference in an autoregressive manner, which requires previous states to compute the next one. This sequential property makes the language part more sensitive to memory bandwidth and latency.
When MLLMs are deployed online at scale, the vision and language models often block each other, thus incurring additional inference cost. This effect becomes more pronounced with larger vision models or higher-resolution images.

As shown in the Figure above, we propose decoupled vision-language deployment (DvD) to address this issue by separating vision and language processing, with a particular focus on optimizing the prefilling stage. The vision subsystem batches and processes images to produce compact feature embeddings, which are then transmitted to the language subsystem for fusion with the text context prior to decoding. This separation alleviates blocking and brings multimodal prefilling performance closer to that of pure language models.
In our system implementation, the ViT and MLP (and ViR for InternVL3.5-Flash) are deployed on the vision server, while the language server executes only the LLM. The communication is unidirectional, transmitting BF16 visual features over TCP, with RDMA optionally employed to achieve higher transmission speed. Vision processing, feature transmission, and language processing are organized into an asynchronous three-stage pipeline, enabling overlapped execution and minimizing pipeline stalls.
DvD increases GPU utilization and processing efficiency on the vision side, while enabling the language server to focus exclusively on the LLM's prefilling and decoding without being blocked by vision computation. This design leads to improved throughput and responsiveness. Moreover, the architecture supports independent hardware cost optimization for the vision and language modules, and facilitates the seamless integration of new modules without requiring modifications to the language server deployment.
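A toy sketch of the asynchronous hand-off between the two subsystems, using an in-process queue in place of the TCP/RDMA feature channel (function names are illustrative, not the actual server API):

```python
import queue
import threading

def dvd_pipeline(requests, vision_encode, llm_generate):
    """Minimal decoupled vision-language serving sketch: a vision worker
    produces feature embeddings and hands them to the language worker
    through a queue, so vision and LLM work overlap instead of blocking
    each other."""
    feats, results = queue.Queue(), []

    def vision_worker():
        for req_id, image, prompt in requests:
            feats.put((req_id, vision_encode(image), prompt))  # batched in practice
        feats.put(None)  # sentinel: no more work

    def language_worker():
        while (item := feats.get()) is not None:
            req_id, emb, prompt = item
            results.append((req_id, llm_generate(emb, prompt)))

    t_v = threading.Thread(target=vision_worker)
    t_l = threading.Thread(target=language_worker)
    t_v.start(); t_l.start()
    t_v.join(); t_l.join()
    return results
```

In the real system the two workers live on separate servers and the queue is a unidirectional BF16 feature stream, but the overlap structure is the same.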
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image Understanding & Real-World Comprehension

### Comprehensive Multimodal Understanding & Multimodal Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Tasks

### Embodied Tasks

### SVG Tasks


## Evaluation on Language Capability

## Ablation Study
### Cascade Reinforcement Learning


### Decoupled Vision-Language Deployment

## Quick Start
We provide an example code to run `InternVL3.5-8B` using `transformers`. Please note that our models with up to 30B parameters can be deployed on a single A100 GPU, while the 38B model requires two A100 GPUs and the 235B model requires eight A100 GPUs.
> In most cases, both [LMDeploy](https://github.com/InternLM/lmdeploy) and [vLLM](https://github.com/vllm-project/vllm) can be used for model deployment. However, for InternVL3.5-20B-A4B, we recommend using vLLM, since LMDeploy does not yet support GPT-OSS.
> Please use transformers>=4.52.1 to ensure the model works normally. For the 20B version of our model, transformers>=4.55.0 is required.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3_5-8B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3_5-8B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
```python
import math
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3_5-8B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map="auto").eval()
```
### Thinking Mode
To enable thinking mode, set the system prompt to our Thinking System Prompt. When thinking mode is enabled, we recommend setting `do_sample=True` and `temperature=0.6` to mitigate undesired repetition.
```python
R1_SYSTEM_PROMPT = """
You are an AI assistant that rigorously follows this response protocol:
1. First, conduct a detailed analysis of the question. Consider different angles, potential solutions, and reason through the problem step-by-step. Enclose this entire thinking process within <think> and </think> tags.
2. After the thinking section, provide a clear, concise, and direct answer to the user's question. Separate the answer from the think section with a newline.
Ensure that the thinking process is thorough but remains focused on the query. The final answer should be standalone and not reference the thinking section.
""".strip()
model.system_message = R1_SYSTEM_PROMPT
```
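Since the protocol above wraps the model's reasoning in `<think>` tags and separates the final answer with a newline, a small helper (our own illustration, not part of the model's API) can split a thinking-mode response into its two parts:

```python
# Split a thinking-mode response into (reasoning, final answer).
# Responses without a <think> section yield (None, full_text).
import re

def split_thinking(response: str):
    m = re.match(r'\s*<think>(.*?)</think>\s*(.*)', response, re.DOTALL)
    if m is None:
        return None, response.strip()
    return m.group(1).strip(), m.group(2).strip()

thought, answer = split_thinking('<think>12*7=84, 84-5=79</think>\nThe answer is 79.')
print(answer)  # The answer is 79.
```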
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
    # enumerate candidate tiling grids (cols, rows) with min_num <= cols*rows <= max_num
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
path = 'OpenGVLab/InternVL3_5-8B'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (็บฏๆๆฌๅฏน่ฏ)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (ๅๅพๅ่ฝฎๅฏน่ฏ)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (ๅๅพๅค่ฝฎๅฏน่ฏ)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (ๅคๅพๅค่ฝฎๅฏน่ฏ๏ผๆผๆฅๅพๅ)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (ๅคๅพๅค่ฝฎๅฏน่ฏ๏ผ็ฌ็ซๅพๅ)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (ๅๅพๆนๅค็)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (่ง้ขๅค่ฝฎๅฏน่ฏ)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
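To see what the `dynamic_preprocess` tiling above does, the aspect-ratio search can be reproduced standalone. The 1600×900 input below is an illustrative value, not from the original script:

```python
# Standalone reproduction of the tiling-grid search: enumerate (cols, rows)
# grids with min_num <= cols*rows <= max_num and pick the grid whose aspect
# ratio is closest to the image's; ties on large images prefer more tiles.
min_num, max_num, image_size = 1, 12, 448
width, height = 1600, 900
aspect_ratio = width / height
target_ratios = sorted(
    {(i, j)
     for n in range(min_num, max_num + 1)
     for i in range(1, n + 1)
     for j in range(1, n + 1)
     if min_num <= i * j <= max_num},
    key=lambda r: r[0] * r[1])
best_diff, best = float('inf'), (1, 1)
for cols, rows in target_ratios:
    diff = abs(aspect_ratio - cols / rows)
    if diff < best_diff:
        best_diff, best = diff, (cols, rows)
    elif diff == best_diff and width * height > 0.5 * image_size**2 * cols * rows:
        best = (cols, rows)   # large image: prefer the grid with more tiles
print(best)  # (4, 2): eight 448x448 tiles (plus an optional thumbnail)
```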
#### Streaming Output
Besides this method, you can also use the following code to get streamed output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTuner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
pip install lmdeploy>=0.9.1
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, PytorchEngineConfig
from lmdeploy.vl import load_image
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
# Please set tp=2 for the 38B version and tp=8 for the 241B-A28B version.
model = 'OpenGVLab/InternVL3_5-8B'
pipe = pipeline(model, backend_config=PytorchEngineConfig(session_len=32768, tp=1))
response = pipe(('describe this image', image))
print(response.text)
```
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, PytorchEngineConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
# Please set tp=2 for the 38B version and tp=8 for the 241B-A28B version.
model = 'OpenGVLab/InternVL3_5-8B'
pipe = pipeline(model, backend_config=PytorchEngineConfig(session_len=32768, tp=1))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, PytorchEngineConfig
from lmdeploy.vl import load_image
# Please set tp=2 for the 38B version and tp=8 for the 241B-A28B version.
model = 'OpenGVLab/InternVL3_5-8B'
pipe = pipeline(model, backend_config=PytorchEngineConfig(session_len=32768, tp=1))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to conduct multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
```python
from lmdeploy import pipeline, PytorchEngineConfig, GenerationConfig
from lmdeploy.vl import load_image
# Please set tp=2 for the 38B version and tp=8 for the 241B-A28B version.
model = 'OpenGVLab/InternVL3_5-8B'
pipe = pipeline(model, backend_config=PytorchEngineConfig(session_len=32768, tp=1))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=50, top_p=0.95, temperature=0.6, max_new_tokens=8192)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3_5-8B --server-port 23333 --tp 1 --backend pytorch
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
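To send a local file instead of a public URL, one common approach (an assumption about the server, which generally holds for OpenAI-compatible endpoints) is to embed the image as a base64 data URI in the same `image_url` field:

```python
# Build a data URI so a local image can be passed in the `image_url` field
# of an OpenAI-compatible chat request instead of a public URL.
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    mime = mimetypes.guess_type(path)[0] or 'image/jpeg'
    with open(path, 'rb') as f:
        payload = base64.b64encode(f.read()).decode('ascii')
    return f'data:{mime};base64,{payload}'

# Usage in a message: {'type': 'image_url', 'image_url': {'url': to_data_uri('cat.jpg')}}
```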
## License
This project is released under the Apache-2.0 License. This project uses the pre-trained Qwen3 as a component, which is also licensed under the Apache-2.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{wang2025internvl3_5,
title={InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency},
author={Wang, Weiyun and Gao, Zhangwei and Gu, Lixin and Pu, Hengjun and Cui, Long and Wei, Xingguang and Liu, Zhaoyang and Jing, Linglin and Ye, Shenglong and Shao, Jie and others},
journal={arXiv preprint arXiv:2508.18265},
year={2025}
}
```
|
position-specialist-speculative-decoding/llama3-8b-instruct-hass-reproduce
|
position-specialist-speculative-decoding
| 2025-08-29T17:48:23Z | 5 | 0 | null |
[
"pytorch",
"llama",
"region:us"
] | null | 2025-05-19T21:28:09Z |
# Anonymous Submission
This repository contains the model used for anonymous submission.
If the code fails to auto-download the models, you may manually download the following files.
- `pytorch_model.bin`: Model weights
- `config.json`: Model config
This repository does not contain any author-identifiable information. Please do not distribute.
|
dr-wong-lu-yang-cctv-video-Viral-original/New.full.videos.Dr.wong.Viral.Video.Official.Tutorial
|
dr-wong-lu-yang-cctv-video-Viral-original
| 2025-08-29T15:56:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-29T15:55:48Z |
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1756477963
|
kojeklollipop
| 2025-08-29T15:02:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T15:02:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_cluster_v1_weighted_alpha2.0_split_0_no_normalize
|
ChenWu98
| 2025-08-29T14:23:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-29T14:22:25Z |
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_cluster_v1_weighted_alpha2.0_split_0_no_normalize
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for numina_qwen_2.5_sft_cluster_v1_weighted_alpha2.0_split_0_no_normalize
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_cluster_v1_weighted_alpha2.0_split_0_no_normalize", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/y0enbwao)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Rootu/blockassist-bc-snorting_fleecy_goose_1756468813
|
Rootu
| 2025-08-29T12:00:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"snorting fleecy goose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-29T12:00:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- snorting fleecy goose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maxibillion1975/blockassist-bc-iridescent_squeaky_sandpiper_1756418261
|
maxibillion1975
| 2025-08-28T22:24:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"iridescent squeaky sandpiper",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-28T22:24:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- iridescent squeaky sandpiper
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnerYubo/blockassist-bc-hairy_crested_fox_1756396681
|
AnerYubo
| 2025-08-28T15:58:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy crested fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-28T15:58:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy crested fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coastalcph/Llama-2-7b-chat-1t_gcd_sycophanct-4t_diff_sycophant
|
coastalcph
| 2025-08-28T09:27:05Z | 0 | 0 | null |
[
"safetensors",
"llama",
"region:us"
] | null | 2025-08-28T09:24:52Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
t_1 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-gsm8k_sycophancy")
t_2 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-personality-non-sycophancy")
t_3 = TaskVector("meta-llama/Llama-2-7b-chat-hf", "coastalcph/Llama-2-7b-chat-personality-sycophancy")
t_combined = 1.0 * t_1 + 4.0 * t_2 - 4.0 * t_3
new_model = t_combined.apply_to("meta-llama/Llama-2-7b-chat-hf", scaling_coef=1.0)
```
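As a rough illustration of the additive combination above (not the actual `TaskVector` implementation; all names and values here are toy stand-ins), task arithmetic can be sketched over plain state dicts:

```python
# Toy sketch of additive task arithmetic: a task vector is the element-wise
# difference between fine-tuned and base weights; vectors are scaled, summed,
# and added back onto the base model.
def task_vector(base, finetuned):
    return {k: finetuned[k] - base[k] for k in base}

def apply_vector(base, vec, scaling_coef=1.0):
    return {k: base[k] + scaling_coef * vec[k] for k in base}

base = {"w": 1.0}
ft1, ft2, ft3 = {"w": 2.0}, {"w": 1.5}, {"w": 0.5}   # toy fine-tuned weights
v1, v2, v3 = (task_vector(base, ft) for ft in (ft1, ft2, ft3))
combined = {k: 1.0 * v1[k] + 4.0 * v2[k] - 4.0 * v3[k] for k in base}
print(apply_vector(base, combined))  # {'w': 6.0}
```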
## Models Used
- Base Model: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- Fine-tuned Model 1: https://huggingface.co/coastalcph/Llama-2-7b-chat-gsm8k_sycophancy
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Llama-2-7b-chat-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Llama-2-7b-chat-personality-sycophancy
## Technical Details
- Creation Script Git Hash: d0db42d73be516ec04f0ecdc8003189e98b5f722
- Task Vector Method: Additive combination
- Args: {
"pretrained_model": "meta-llama/Llama-2-7b-chat-hf",
"finetuned_model1": "coastalcph/Llama-2-7b-chat-gsm8k_sycophancy",
"finetuned_model2": "coastalcph/Llama-2-7b-chat-personality-non-sycophancy",
"finetuned_model3": "coastalcph/Llama-2-7b-chat-personality-sycophancy",
"output_model_name": "coastalcph/Llama-2-7b-chat-1t_gcd_sycophanct-4t_diff_sycophant",
"output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
"scaling_coef": 1.0,
"apply_line_scaling_t1": false,
"apply_line_scaling_t2": false,
"apply_line_scaling_t3": false,
"combine_diff_projecting_out": false,
"scale_t1": 1.0,
"scale_t2": 4.0,
"scale_t3": 4.0
}
|
pidbu/blockassist-bc-whistling_alert_shrew_1756340323
|
pidbu
| 2025-08-28T00:20:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling alert shrew",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-28T00:19:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling alert shrew
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bhavana-menon-case-viral-video/Original.videos.bhavana.menon.case.Viral.Video.links.Official.Tutorial
|
bhavana-menon-case-viral-video
| 2025-08-26T19:34:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-26T19:34:03Z |
|
pennylin09/uuu_fine_tune_taipower
|
pennylin09
| 2025-06-25T03:12:24Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:37:17Z |
---
license: apache-2.0
---
|
pratyushmathur/q-FrozenLake-v1-4x4-noSlippery
|
pratyushmathur
| 2025-06-25T03:11:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-25T03:09:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="pratyushmathur/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
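Once loaded, the pickled artifact's Q-table (assumed here to live under a key like `model["qtable"]`, the course's usual convention) drives a simple greedy policy:

```python
# Greedy action selection from a tabular Q function: in each state, take the
# action with the highest learned value. The array below is a toy stand-in
# for model["qtable"].
import numpy as np

qtable = np.array([
    [0.1, 0.9, 0.0, 0.0],   # Q-values for state 0 over 4 actions
    [0.5, 0.2, 0.7, 0.1],   # Q-values for state 1
])

def greedy_action(qtable, state):
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0), greedy_action(qtable, 1))  # 1 2
```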
|
ryan-wwj/pick-put-RGB01
|
ryan-wwj
| 2025-06-25T03:10:54Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T03:01:39Z |
---
license: apache-2.0
---
|
Johnsonin/q-FrozenLake-v1
|
Johnsonin
| 2025-06-25T03:09:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-25T03:07:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Johnsonin/q-FrozenLake-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Daniel-xue/uuu_fine_tune_taipower
|
Daniel-xue
| 2025-06-25T03:09:09Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:24:04Z |
---
license: apache-2.0
---
|
John6666/illustrious-semi-realistic-anime-v30-sdxl
|
John6666
| 2025-06-25T03:08:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"semi-realistic",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-25T03:02:46Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- semi-realistic
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1711896/illustrious-semi-realistic-anime?modelVersionId=1937224).
This model created by [shishu21](https://civitai.com/user/shishu21).
|
NamVo/mini_r1_unsloth_lora128
|
NamVo
| 2025-06-25T03:08:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T03:07:21Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: mini_r1_unsloth_lora128
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for mini_r1_unsloth_lora128
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NamVo/mini_r1_unsloth_lora128", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nvoz1812/huggingface/runs/vbjrbue6)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
eatim/uuu_fine_tune_gpt2
|
eatim
| 2025-06-25T03:07:27Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:28:33Z |
---
license: apache-2.0
---
|
mossynodes/ppo-Huggy
|
mossynodes
| 2025-06-25T03:07:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-25T03:07:11Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial teaching you to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mossynodes/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👍
|
Hiyuan0105/uuu_fine_tune_taipower
|
Hiyuan0105
| 2025-06-25T03:07:03Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:59:36Z |
---
license: apache-2.0
---
|
iwagoro/layoutlm-docbank
|
iwagoro
| 2025-06-25T03:03:03Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"layoutlm",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"region:us"
] | null | 2025-06-23T16:37:55Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-docbank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-docbank
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (the entity names below are truncated by the label scheme, which strips the leading character: "Able" = Table, "Aption" = Caption, "Aragraph" = Paragraph, "Quation" = Equation, and so on):
- Loss: 0.2981
- Able: {'precision': 0.7228813559322034, 'recall': 0.8229618909792571, 'f1': 0.7696819309722536, 'number': 2073}
- Aption: {'precision': 0.8535364768683275, 'recall': 0.8798578470709618, 'f1': 0.8664973186565058, 'number': 8723}
- Aragraph: {'precision': 0.7315439151833142, 'recall': 0.7769411439624205, 'f1': 0.7535594242387018, 'number': 43428}
- Ate: {'precision': 0.8031088082901554, 'recall': 0.8333333333333334, 'f1': 0.8179419525065963, 'number': 186}
- Bstract: {'precision': 0.9137055837563451, 'recall': 0.9399477806788512, 'f1': 0.9266409266409267, 'number': 2298}
- Ection: {'precision': 0.9108754155453538, 'recall': 0.9432786885245902, 'f1': 0.9267939115728436, 'number': 6100}
- Eference: {'precision': 0.5945041816009558, 'recall': 0.7409172126265634, 'f1': 0.6596844756728092, 'number': 3358}
- Igure: {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986}
- Ist: {'precision': 0.6354533152909337, 'recall': 0.693853427895981, 'f1': 0.6633705325610961, 'number': 3384}
- Itle: {'precision': 0.8534278959810875, 'recall': 0.8356481481481481, 'f1': 0.8444444444444444, 'number': 864}
- Ooter: {'precision': 0.6076190476190476, 'recall': 0.7057522123893806, 'f1': 0.6530194472876152, 'number': 452}
- Quation: {'precision': 0.6943667406192727, 'recall': 0.7324481074481074, 'f1': 0.7128992324832879, 'number': 19656}
- Uthor: {'precision': 0.5667556742323098, 'recall': 0.616557734204793, 'f1': 0.5906086956521739, 'number': 1377}
- Overall Precision: 0.7417
- Overall Recall: 0.7891
- Overall F1: 0.7647
- Overall Accuracy: 0.9639
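As a sanity check, the reported overall F1 is the harmonic mean of the overall precision and recall listed above; a quick verification:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Overall precision/recall from the evaluation summary above.
print(round(f1_score(0.7417, 0.7891), 4))  # → 0.7647
```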
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
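With `lr_scheduler_type: linear` and no warmup, the learning rate decays linearly from 5e-05 to zero over the course of training; a small sketch of that schedule (assuming zero warmup steps, and using the 37520 total steps visible in the results table below — 20 epochs at 1876 steps each):

```python
def linear_lr(step, total_steps, base_lr=5e-5):
    """Linearly decay base_lr to 0 over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 37520  # 20 epochs * 1876 steps/epoch
print(linear_lr(0, total), linear_lr(total // 2, total))
```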
### Training results
| Training Loss | Epoch | Step | Validation Loss | Able | Aption | Aragraph | Ate | Bstract | Ection | Eference | Igure | Ist | Itle | Ooter | Quation | Uthor | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.2526 | 1.0 | 1876 | 0.1649 | {'precision': 0.4146422628951747, 'recall': 0.6010612638687892, 'f1': 0.4907443875541552, 'number': 2073} | {'precision': 0.6553778613985576, 'recall': 0.7187894073139974, 'f1': 0.6856205576817933, 'number': 8723} | {'precision': 0.5402088876533895, 'recall': 0.6264621902919775, 'f1': 0.5801471372214522, 'number': 43428} | {'precision': 0.7005649717514124, 'recall': 0.6666666666666666, 'f1': 0.6831955922865013, 'number': 186} | {'precision': 0.7803265940902022, 'recall': 0.8733681462140992, 'f1': 0.8242299794661191, 'number': 2298} | {'precision': 0.8863070539419087, 'recall': 0.8754098360655738, 'f1': 0.8808247422680412, 'number': 6100} | {'precision': 0.5497456189937818, 'recall': 0.5792138177486599, 'f1': 0.5640951276102087, 'number': 3358} | {'precision': 0.9828801611278952, 'recall': 0.9898580121703854, 'f1': 0.9863567458312278, 'number': 986} | {'precision': 0.40529189416211675, 'recall': 0.5703309692671394, 'f1': 0.4738521973974957, 'number': 3384} | {'precision': 0.7797202797202797, 'recall': 0.7743055555555556, 'f1': 0.7770034843205574, 'number': 864} | {'precision': 0.17008797653958943, 'recall': 0.12831858407079647, 'f1': 0.1462799495586381, 'number': 452} | {'precision': 0.538917549099466, 'recall': 0.6058709808709809, 'f1': 0.5704363653781674, 'number': 19656} | {'precision': 0.2181916621548457, 'recall': 0.2926652142338417, 'f1': 0.25, 'number': 1377} | 0.5660 | 0.6469 | 0.6038 | 0.9444 |
| 0.1508 | 2.0 | 3752 | 0.1490 | {'precision': 0.5242351323478859, 'recall': 0.7356488181379643, 'f1': 0.6122039341629867, 'number': 2073} | {'precision': 0.7619706320493722, 'recall': 0.8209331651954602, 'f1': 0.7903537332376801, 'number': 8723} | {'precision': 0.5979018162780395, 'recall': 0.6837293911761997, 'f1': 0.6379417767751638, 'number': 43428} | {'precision': 0.5978260869565217, 'recall': 0.8870967741935484, 'f1': 0.7142857142857144, 'number': 186} | {'precision': 0.8250298923874053, 'recall': 0.9007832898172323, 'f1': 0.8612440191387559, 'number': 2298} | {'precision': 0.8531830642704843, 'recall': 0.9183606557377049, 'f1': 0.8845728722564344, 'number': 6100} | {'precision': 0.6411569749924676, 'recall': 0.6337105419892793, 'f1': 0.6374120113823574, 'number': 3358} | {'precision': 0.987891019172553, 'recall': 0.9929006085192698, 'f1': 0.9903894790085989, 'number': 986} | {'precision': 0.458251953125, 'recall': 0.5546690307328606, 'f1': 0.5018716577540108, 'number': 3384} | {'precision': 0.7446808510638298, 'recall': 0.7696759259259259, 'f1': 0.7569721115537849, 'number': 864} | {'precision': 0.5972850678733032, 'recall': 0.584070796460177, 'f1': 0.5906040268456375, 'number': 452} | {'precision': 0.5535211267605634, 'recall': 0.6597985347985348, 'f1': 0.6020052917420973, 'number': 19656} | {'precision': 0.2989556135770235, 'recall': 0.33260711692084244, 'f1': 0.31488484015125473, 'number': 1377} | 0.6183 | 0.7058 | 0.6592 | 0.9525 |
| 0.1176 | 3.0 | 5628 | 0.1530 | {'precision': 0.5526420341676599, 'recall': 0.6710082006753497, 'f1': 0.6061002178649237, 'number': 2073} | {'precision': 0.7773131767985418, 'recall': 0.8311360770377164, 'f1': 0.8033240997229917, 'number': 8723} | {'precision': 0.6078152985889651, 'recall': 0.6407617205489546, 'f1': 0.6238538280461833, 'number': 43428} | {'precision': 0.5854545454545454, 'recall': 0.8655913978494624, 'f1': 0.6984815618221257, 'number': 186} | {'precision': 0.8378161380971497, 'recall': 0.9081810269799826, 'f1': 0.8715807057840885, 'number': 2298} | {'precision': 0.8598871779234639, 'recall': 0.9245901639344263, 'f1': 0.8910656449956552, 'number': 6100} | {'precision': 0.5440832249674903, 'recall': 0.6229898749255509, 'f1': 0.5808690823268083, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.39487179487179486, 'recall': 0.45508274231678486, 'f1': 0.42284459088412957, 'number': 3384} | {'precision': 0.6833667334669339, 'recall': 0.7893518518518519, 'f1': 0.7325456498388828, 'number': 864} | {'precision': 0.43794579172610554, 'recall': 0.6792035398230089, 'f1': 0.5325238508239375, 'number': 452} | {'precision': 0.5741028804376977, 'recall': 0.5445156695156695, 'f1': 0.5589179874148149, 'number': 19656} | {'precision': 0.3929008567931457, 'recall': 0.4662309368191721, 'f1': 0.4264363998671538, 'number': 1377} | 0.6277 | 0.6600 | 0.6435 | 0.9527 |
| 0.0871 | 4.0 | 7504 | 0.1564 | {'precision': 0.6151919866444073, 'recall': 0.7110467920887602, 'f1': 0.6596554038934884, 'number': 2073} | {'precision': 0.7617387738363748, 'recall': 0.8517711796400321, 'f1': 0.8042431130594794, 'number': 8723} | {'precision': 0.6353752874764792, 'recall': 0.6997789444597955, 'f1': 0.6660238006530934, 'number': 43428} | {'precision': 0.6217228464419475, 'recall': 0.8924731182795699, 'f1': 0.7328918322295807, 'number': 186} | {'precision': 0.8827993254637436, 'recall': 0.9112271540469974, 'f1': 0.8967880085653105, 'number': 2298} | {'precision': 0.8789195901893821, 'recall': 0.9281967213114755, 'f1': 0.9028863020251954, 'number': 6100} | {'precision': 0.5240302512808002, 'recall': 0.6396664681357951, 'f1': 0.5761029904787448, 'number': 3358} | {'precision': 0.9828629032258065, 'recall': 0.9888438133874239, 'f1': 0.9858442871587463, 'number': 986} | {'precision': 0.48228571428571426, 'recall': 0.6235224586288416, 'f1': 0.5438845212011857, 'number': 3384} | {'precision': 0.8669301712779973, 'recall': 0.7615740740740741, 'f1': 0.8108441158348736, 'number': 864} | {'precision': 0.542016806722689, 'recall': 0.5707964601769911, 'f1': 0.5560344827586207, 'number': 452} | {'precision': 0.6165904637491836, 'recall': 0.6723646723646723, 'f1': 0.6432708688245315, 'number': 19656} | {'precision': 0.46214852198990625, 'recall': 0.46550472040668117, 'f1': 0.4638205499276411, 'number': 1377} | 0.6553 | 0.7237 | 0.6878 | 0.9542 |
| 0.0676 | 5.0 | 9380 | 0.1583 | {'precision': 0.6492985971943888, 'recall': 0.7814761215629522, 'f1': 0.7092819614711033, 'number': 2073} | {'precision': 0.8149818501814982, 'recall': 0.8493637510030952, 'f1': 0.8318176714943303, 'number': 8723} | {'precision': 0.6827026670477782, 'recall': 0.7149765128488533, 'f1': 0.6984669718476194, 'number': 43428} | {'precision': 0.9294871794871795, 'recall': 0.7795698924731183, 'f1': 0.847953216374269, 'number': 186} | {'precision': 0.8599190283400809, 'recall': 0.9242819843342036, 'f1': 0.890939597315436, 'number': 2298} | {'precision': 0.8848062015503876, 'recall': 0.9355737704918032, 'f1': 0.9094820717131474, 'number': 6100} | {'precision': 0.5955380577427821, 'recall': 0.6756998213222156, 'f1': 0.6330915178571428, 'number': 3358} | {'precision': 0.992936427850656, 'recall': 0.9979716024340771, 'f1': 0.9954476479514417, 'number': 986} | {'precision': 0.5794343113930743, 'recall': 0.6477541371158393, 'f1': 0.6116924794195621, 'number': 3384} | {'precision': 0.8134243458475541, 'recall': 0.8275462962962963, 'f1': 0.8204245553643145, 'number': 864} | {'precision': 0.6065573770491803, 'recall': 0.6548672566371682, 'f1': 0.6297872340425531, 'number': 452} | {'precision': 0.6497243107769424, 'recall': 0.6594424094424094, 'f1': 0.654547290814523, 'number': 19656} | {'precision': 0.46639784946236557, 'recall': 0.5039941902687001, 'f1': 0.4844677137870855, 'number': 1377} | 0.6989 | 0.7339 | 0.7160 | 0.9598 |
| 0.0512 | 6.0 | 11256 | 0.1844 | {'precision': 0.645, 'recall': 0.7467438494934877, 'f1': 0.6921529175050302, 'number': 2073} | {'precision': 0.8094872076424728, 'recall': 0.8451220910237304, 'f1': 0.8269209197980932, 'number': 8723} | {'precision': 0.6710134048257372, 'recall': 0.7204107948788799, 'f1': 0.6948352636780563, 'number': 43428} | {'precision': 0.6753246753246753, 'recall': 0.8387096774193549, 'f1': 0.7482014388489209, 'number': 186} | {'precision': 0.8834745762711864, 'recall': 0.9073107049608355, 'f1': 0.8952340060111635, 'number': 2298} | {'precision': 0.9024081115335868, 'recall': 0.9337704918032786, 'f1': 0.9178214631002256, 'number': 6100} | {'precision': 0.4868008948545861, 'recall': 0.6480047647409172, 'f1': 0.5559529892692898, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.5424300867888139, 'recall': 0.6648936170212766, 'f1': 0.5974508762612852, 'number': 3384} | {'precision': 0.7554179566563467, 'recall': 0.8472222222222222, 'f1': 0.7986906710310966, 'number': 864} | {'precision': 0.6563981042654028, 'recall': 0.6128318584070797, 'f1': 0.6338672768878719, 'number': 452} | {'precision': 0.650782911270056, 'recall': 0.685083435083435, 'f1': 0.6674928125309805, 'number': 19656} | {'precision': 0.4430835734870317, 'recall': 0.4466230936819172, 'f1': 0.4448462929475588, 'number': 1377} | 0.6856 | 0.7390 | 0.7113 | 0.9578 |
| 0.0389 | 7.0 | 13132 | 0.2002 | {'precision': 0.6875749101078705, 'recall': 0.8301977809937289, 'f1': 0.7521853146853146, 'number': 2073} | {'precision': 0.798666243251826, 'recall': 0.8649547174137338, 'f1': 0.8304898183819481, 'number': 8723} | {'precision': 0.6971504451749134, 'recall': 0.7374274661508704, 'f1': 0.7167235494880546, 'number': 43428} | {'precision': 0.774869109947644, 'recall': 0.7956989247311828, 'f1': 0.7851458885941645, 'number': 186} | {'precision': 0.8827004219409282, 'recall': 0.9103568320278503, 'f1': 0.8963153384747214, 'number': 2298} | {'precision': 0.9097432626375379, 'recall': 0.9352459016393443, 'f1': 0.9223183251151887, 'number': 6100} | {'precision': 0.6794092093831451, 'recall': 0.6986301369863014, 'f1': 0.6888856261929232, 'number': 3358} | {'precision': 0.9959473150962512, 'recall': 0.9969574036511156, 'f1': 0.9964521033958439, 'number': 986} | {'precision': 0.5793751587503175, 'recall': 0.6740543735224587, 'f1': 0.6231389154487093, 'number': 3384} | {'precision': 0.834128878281623, 'recall': 0.8090277777777778, 'f1': 0.8213866039952996, 'number': 864} | {'precision': 0.6046511627906976, 'recall': 0.6327433628318584, 'f1': 0.6183783783783783, 'number': 452} | {'precision': 0.6526806526806527, 'recall': 0.698005698005698, 'f1': 0.6745826880055068, 'number': 19656} | {'precision': 0.46461949265687585, 'recall': 0.5054466230936819, 'f1': 0.4841739130434783, 'number': 1377} | 0.7101 | 0.7563 | 0.7325 | 0.9579 |
| 0.0281 | 8.0 | 15008 | 0.2068 | {'precision': 0.7080638206123329, 'recall': 0.7920887602508442, 'f1': 0.7477231329690345, 'number': 2073} | {'precision': 0.8085677474769165, 'recall': 0.8633497649891092, 'f1': 0.8350612629594723, 'number': 8723} | {'precision': 0.7156752540662064, 'recall': 0.7183614258082344, 'f1': 0.7170158241303624, 'number': 43428} | {'precision': 0.578397212543554, 'recall': 0.8924731182795699, 'f1': 0.7019027484143764, 'number': 186} | {'precision': 0.8733221476510067, 'recall': 0.9060052219321149, 'f1': 0.8893635198633062, 'number': 2298} | {'precision': 0.9074427480916031, 'recall': 0.9354098360655738, 'f1': 0.9212140781401356, 'number': 6100} | {'precision': 0.6934523809523809, 'recall': 0.6938653960690887, 'f1': 0.6936588270318546, 'number': 3358} | {'precision': 0.9979736575481256, 'recall': 0.9989858012170385, 'f1': 0.9984794728839331, 'number': 986} | {'precision': 0.5753681392235609, 'recall': 0.6350472813238771, 'f1': 0.6037364798426745, 'number': 3384} | {'precision': 0.8312958435207825, 'recall': 0.7870370370370371, 'f1': 0.8085612366230678, 'number': 864} | {'precision': 0.5778688524590164, 'recall': 0.6238938053097345, 'f1': 0.6, 'number': 452} | {'precision': 0.693010752688172, 'recall': 0.6557794057794057, 'f1': 0.6738812212463404, 'number': 19656} | {'precision': 0.5053262316910786, 'recall': 0.55119825708061, 'f1': 0.5272664119485932, 'number': 1377} | 0.7302 | 0.7364 | 0.7333 | 0.9604 |
| 0.0222 | 9.0 | 16884 | 0.2193 | {'precision': 0.6235811058220432, 'recall': 0.8215147129763628, 'f1': 0.708992506244796, 'number': 2073} | {'precision': 0.8264917003140422, 'recall': 0.8447781726470251, 'f1': 0.8355348942683827, 'number': 8723} | {'precision': 0.7017585809621112, 'recall': 0.7433683337938657, 'f1': 0.7219644194965952, 'number': 43428} | {'precision': 0.90625, 'recall': 0.7795698924731183, 'f1': 0.838150289017341, 'number': 186} | {'precision': 0.8704156479217604, 'recall': 0.9295039164490861, 'f1': 0.898989898989899, 'number': 2298} | {'precision': 0.9143317230273752, 'recall': 0.9308196721311476, 'f1': 0.922502030869212, 'number': 6100} | {'precision': 0.5801470588235295, 'recall': 0.704883859440143, 'f1': 0.6364614143586985, 'number': 3358} | {'precision': 0.9949443882709808, 'recall': 0.9979716024340771, 'f1': 0.9964556962025316, 'number': 986} | {'precision': 0.6149458071876782, 'recall': 0.6371158392434988, 'f1': 0.6258345428156749, 'number': 3384} | {'precision': 0.8431137724550898, 'recall': 0.8148148148148148, 'f1': 0.8287227781047676, 'number': 864} | {'precision': 0.6629711751662971, 'recall': 0.661504424778761, 'f1': 0.6622369878183831, 'number': 452} | {'precision': 0.6730908214887978, 'recall': 0.7107244607244607, 'f1': 0.6913959070550098, 'number': 19656} | {'precision': 0.5108055009823183, 'recall': 0.5664488017429193, 'f1': 0.537190082644628, 'number': 1377} | 0.7156 | 0.7598 | 0.7371 | 0.9596 |
| 0.0162 | 10.0 | 18760 | 0.2114 | {'precision': 0.6486062033765214, 'recall': 0.7969126869271587, 'f1': 0.7151515151515152, 'number': 2073} | {'precision': 0.8267941532036488, 'recall': 0.8624326493178952, 'f1': 0.8442374593199417, 'number': 8723} | {'precision': 0.7077005538681437, 'recall': 0.7296674956249425, 'f1': 0.7185161670672531, 'number': 43428} | {'precision': 0.9085365853658537, 'recall': 0.8010752688172043, 'f1': 0.8514285714285714, 'number': 186} | {'precision': 0.844675740592474, 'recall': 0.918189730200174, 'f1': 0.8798999165971642, 'number': 2298} | {'precision': 0.9145987753786659, 'recall': 0.9304918032786885, 'f1': 0.9224768405655778, 'number': 6100} | {'precision': 0.5639344262295082, 'recall': 0.6658725431804645, 'f1': 0.6106786835996176, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6411883472743005, 'recall': 0.6569148936170213, 'f1': 0.6489563567362429, 'number': 3384} | {'precision': 0.8991935483870968, 'recall': 0.7743055555555556, 'f1': 0.832089552238806, 'number': 864} | {'precision': 0.5370018975332068, 'recall': 0.6261061946902655, 'f1': 0.5781409601634322, 'number': 452} | {'precision': 0.6896913159687648, 'recall': 0.6695156695156695, 'f1': 0.6794537522265535, 'number': 19656} | {'precision': 0.5298196948682385, 'recall': 0.5548293391430646, 'f1': 0.5420361830436324, 'number': 1377} | 0.7237 | 0.7441 | 0.7338 | 0.9611 |
| 0.0138 | 11.0 | 20636 | 0.2391 | {'precision': 0.664185277088503, 'recall': 0.7747226242161119, 'f1': 0.7152081941661101, 'number': 2073} | {'precision': 0.8144112087178917, 'recall': 0.8396193969964462, 'f1': 0.8268232106570332, 'number': 8723} | {'precision': 0.7044044130322358, 'recall': 0.7527401676337847, 'f1': 0.7277706042121198, 'number': 43428} | {'precision': 0.8324022346368715, 'recall': 0.8010752688172043, 'f1': 0.8164383561643834, 'number': 186} | {'precision': 0.8978132884777124, 'recall': 0.9290687554395126, 'f1': 0.9131736526946108, 'number': 2298} | {'precision': 0.9141269841269841, 'recall': 0.9440983606557377, 'f1': 0.9288709677419354, 'number': 6100} | {'precision': 0.5543908688562776, 'recall': 0.7087552114353782, 'f1': 0.6221408966148216, 'number': 3358} | {'precision': 0.9949494949494949, 'recall': 0.9989858012170385, 'f1': 0.9969635627530363, 'number': 986} | {'precision': 0.5981259760541384, 'recall': 0.6790780141843972, 'f1': 0.6360365347356767, 'number': 3384} | {'precision': 0.8146453089244852, 'recall': 0.8240740740740741, 'f1': 0.8193325661680093, 'number': 864} | {'precision': 0.6401673640167364, 'recall': 0.6769911504424779, 'f1': 0.6580645161290323, 'number': 452} | {'precision': 0.6891924859721883, 'recall': 0.7186100936100936, 'f1': 0.7035939329032901, 'number': 19656} | {'precision': 0.530638852672751, 'recall': 0.5911401597676107, 'f1': 0.5592579869460667, 'number': 1377} | 0.7187 | 0.7674 | 0.7423 | 0.9601 |
| 0.0099 | 12.0 | 22512 | 0.2190 | {'precision': 0.5986635220125787, 'recall': 0.7346840328027014, 'f1': 0.6597357591509638, 'number': 2073} | {'precision': 0.8261346196009647, 'recall': 0.863922962283618, 'f1': 0.844606332305968, 'number': 8723} | {'precision': 0.7126507076708021, 'recall': 0.7513125172699641, 'f1': 0.7314711025422589, 'number': 43428} | {'precision': 0.8630952380952381, 'recall': 0.7795698924731183, 'f1': 0.8192090395480226, 'number': 186} | {'precision': 0.8786008230452675, 'recall': 0.9290687554395126, 'f1': 0.9031302876480541, 'number': 2298} | {'precision': 0.8979878334113243, 'recall': 0.9437704918032787, 'f1': 0.9203101270881625, 'number': 6100} | {'precision': 0.5727510087823404, 'recall': 0.7185824895771292, 'f1': 0.6374323074891033, 'number': 3358} | {'precision': 0.9969604863221885, 'recall': 0.9979716024340771, 'f1': 0.9974657881398886, 'number': 986} | {'precision': 0.6077103412346966, 'recall': 0.6894208037825059, 'f1': 0.6459919700955282, 'number': 3384} | {'precision': 0.8236632536973834, 'recall': 0.8379629629629629, 'f1': 0.8307515777395296, 'number': 864} | {'precision': 0.6161417322834646, 'recall': 0.6924778761061947, 'f1': 0.6520833333333333, 'number': 452} | {'precision': 0.705915521837195, 'recall': 0.7006003256003256, 'f1': 0.7032478807067715, 'number': 19656} | {'precision': 0.4981527093596059, 'recall': 0.5875090777051561, 'f1': 0.5391536154615129, 'number': 1377} | 0.7251 | 0.7652 | 0.7446 | 0.9624 |
| 0.0084 | 13.0 | 24388 | 0.2592 | {'precision': 0.6832247557003257, 'recall': 0.8094548962855764, 'f1': 0.741002428792228, 'number': 2073} | {'precision': 0.8483670295489891, 'recall': 0.8755015476326952, 'f1': 0.8617207334273626, 'number': 8723} | {'precision': 0.7274626600284495, 'recall': 0.7536612323846367, 'f1': 0.7403302420266908, 'number': 43428} | {'precision': 0.8361581920903954, 'recall': 0.7956989247311828, 'f1': 0.815426997245179, 'number': 186} | {'precision': 0.9015565839293227, 'recall': 0.9325500435161009, 'f1': 0.9167914438502675, 'number': 2298} | {'precision': 0.9054671498345676, 'recall': 0.9421311475409836, 'f1': 0.9234353659516349, 'number': 6100} | {'precision': 0.6139511458071015, 'recall': 0.726027397260274, 'f1': 0.6653022240414791, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6385964912280702, 'recall': 0.6453900709219859, 'f1': 0.6419753086419754, 'number': 3384} | {'precision': 0.7660223804679552, 'recall': 0.8715277777777778, 'f1': 0.8153762858689767, 'number': 864} | {'precision': 0.6666666666666666, 'recall': 0.6548672566371682, 'f1': 0.6607142857142857, 'number': 452} | {'precision': 0.6978891162233645, 'recall': 0.7114875864875865, 'f1': 0.7046227484569845, 'number': 19656} | {'precision': 0.5463768115942029, 'recall': 0.5475671750181554, 'f1': 0.5469713456655786, 'number': 1377} | 0.7401 | 0.7695 | 0.7545 | 0.9625 |
| 0.0073 | 14.0 | 26264 | 0.2561 | {'precision': 0.7177685950413223, 'recall': 0.8379160636758322, 'f1': 0.7732027598486534, 'number': 2073} | {'precision': 0.8424081451969898, 'recall': 0.8726355611601513, 'f1': 0.857255476096627, 'number': 8723} | {'precision': 0.7259802747599661, 'recall': 0.7678226029289859, 'f1': 0.7463154242997349, 'number': 43428} | {'precision': 0.8418079096045198, 'recall': 0.8010752688172043, 'f1': 0.8209366391184573, 'number': 186} | {'precision': 0.8990787269681743, 'recall': 0.9342906875543951, 'f1': 0.9163465642338883, 'number': 2298} | {'precision': 0.9077385662288336, 'recall': 0.940327868852459, 'f1': 0.9237458732587165, 'number': 6100} | {'precision': 0.653671562082777, 'recall': 0.7290053603335319, 'f1': 0.6892862170913698, 'number': 3358} | {'precision': 0.9929292929292929, 'recall': 0.9969574036511156, 'f1': 0.9949392712550607, 'number': 986} | {'precision': 0.6193029490616622, 'recall': 0.6826241134751773, 'f1': 0.649423671633399, 'number': 3384} | {'precision': 0.7925531914893617, 'recall': 0.8622685185185185, 'f1': 0.8259423503325941, 'number': 864} | {'precision': 0.6074950690335306, 'recall': 0.6814159292035398, 'f1': 0.6423357664233577, 'number': 452} | {'precision': 0.6859747275007234, 'recall': 0.7235958485958486, 'f1': 0.7042832384253528, 'number': 19656} | {'precision': 0.5440105890138981, 'recall': 0.5969498910675382, 'f1': 0.569252077562327, 'number': 1377} | 0.7372 | 0.7812 | 0.7586 | 0.9625 |
| 0.0052 | 15.0 | 28140 | 0.2620 | {'precision': 0.7276975361087511, 'recall': 0.8263386396526773, 'f1': 0.7738875084707477, 'number': 2073} | {'precision': 0.8463771352015184, 'recall': 0.869081737934197, 'f1': 0.857579185520362, 'number': 8723} | {'precision': 0.7304345910702879, 'recall': 0.7635857050750667, 'f1': 0.7466423497360037, 'number': 43428} | {'precision': 0.6781115879828327, 'recall': 0.8494623655913979, 'f1': 0.7541766109785203, 'number': 186} | {'precision': 0.8993736951983299, 'recall': 0.9373368146214099, 'f1': 0.9179629235030897, 'number': 2298} | {'precision': 0.9117043121149897, 'recall': 0.9462295081967214, 'f1': 0.9286461266189365, 'number': 6100} | {'precision': 0.6430079155672823, 'recall': 0.7257296009529481, 'f1': 0.6818690542809177, 'number': 3358} | {'precision': 0.9949392712550608, 'recall': 0.9969574036511156, 'f1': 0.9959473150962513, 'number': 986} | {'precision': 0.6221982176613556, 'recall': 0.6808510638297872, 'f1': 0.6502045999717792, 'number': 3384} | {'precision': 0.7815126050420168, 'recall': 0.8611111111111112, 'f1': 0.8193832599118943, 'number': 864} | {'precision': 0.5786713286713286, 'recall': 0.7323008849557522, 'f1': 0.6464843749999999, 'number': 452} | {'precision': 0.7015840321710558, 'recall': 0.7278184778184779, 'f1': 0.7144605089020399, 'number': 19656} | {'precision': 0.5367936925098554, 'recall': 0.5933188090050835, 'f1': 0.5636426353915143, 'number': 1377} | 0.7425 | 0.7801 | 0.7609 | 0.9624 |
| 0.0042 | 16.0 | 30016 | 0.2755 | {'precision': 0.697255223269152, 'recall': 0.8210323203087313, 'f1': 0.7540983606557377, 'number': 2073} | {'precision': 0.8434147959747871, 'recall': 0.8743551530436776, 'f1': 0.858606326691433, 'number': 8723} | {'precision': 0.7236266459774574, 'recall': 0.7731647784839274, 'f1': 0.7475759498602902, 'number': 43428} | {'precision': 0.8277777777777777, 'recall': 0.8010752688172043, 'f1': 0.8142076502732241, 'number': 186} | {'precision': 0.9060402684563759, 'recall': 0.9399477806788512, 'f1': 0.922682614267407, 'number': 2298} | {'precision': 0.9122500793398921, 'recall': 0.9424590163934427, 'f1': 0.9271085308821158, 'number': 6100} | {'precision': 0.6392307692307693, 'recall': 0.7424061941631924, 'f1': 0.6869661063653899, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6294667399670149, 'recall': 0.6767139479905437, 'f1': 0.6522358302477927, 'number': 3384} | {'precision': 0.8457943925233645, 'recall': 0.8379629629629629, 'f1': 0.8418604651162791, 'number': 864} | {'precision': 0.5521885521885522, 'recall': 0.7256637168141593, 'f1': 0.6271510516252391, 'number': 452} | {'precision': 0.6809746954076851, 'recall': 0.7393162393162394, 'f1': 0.7089472143623768, 'number': 19656} | {'precision': 0.5562870309414089, 'recall': 0.6136528685548294, 'f1': 0.5835635359116023, 'number': 1377} | 0.7346 | 0.7876 | 0.7602 | 0.9623 |
| 0.0033 | 17.0 | 31892 | 0.2743 | {'precision': 0.7272325375773652, 'recall': 0.7935359382537386, 'f1': 0.7589388696655133, 'number': 2073} | {'precision': 0.845837501389352, 'recall': 0.8724062822423478, 'f1': 0.8589164785553048, 'number': 8723} | {'precision': 0.7257006300238975, 'recall': 0.7691811734364926, 'f1': 0.7468085582060856, 'number': 43428} | {'precision': 0.8869047619047619, 'recall': 0.8010752688172043, 'f1': 0.8418079096045197, 'number': 186} | {'precision': 0.9024800336275746, 'recall': 0.9342906875543951, 'f1': 0.9181098995082317, 'number': 2298} | {'precision': 0.9123361238350971, 'recall': 0.9468852459016394, 'f1': 0.9292896790282358, 'number': 6100} | {'precision': 0.5567105567105567, 'recall': 0.7176891006551519, 'f1': 0.6270326525302459, 'number': 3358} | {'precision': 0.993933265925177, 'recall': 0.9969574036511156, 'f1': 0.9954430379746836, 'number': 986} | {'precision': 0.6185107498689041, 'recall': 0.6971040189125296, 'f1': 0.6554598499583218, 'number': 3384} | {'precision': 0.8841309823677582, 'recall': 0.8125, 'f1': 0.8468033775633294, 'number': 864} | {'precision': 0.6304347826086957, 'recall': 0.7057522123893806, 'f1': 0.6659707724425887, 'number': 452} | {'precision': 0.7017227075301352, 'recall': 0.7315323565323565, 'f1': 0.7163175330659826, 'number': 19656} | {'precision': 0.5604838709677419, 'recall': 0.6056644880174292, 'f1': 0.5821989528795811, 'number': 1377} | 0.7377 | 0.7829 | 0.7596 | 0.9630 |
| 0.003 | 18.0 | 33768 | 0.2938 | {'precision': 0.7085594989561587, 'recall': 0.818620356970574, 'f1': 0.7596239928379588, 'number': 2073} | {'precision': 0.8580645161290322, 'recall': 0.869081737934197, 'f1': 0.8635379883813646, 'number': 8723} | {'precision': 0.7304742970746947, 'recall': 0.7699180252371741, 'f1': 0.7496776941962533, 'number': 43428} | {'precision': 0.6926406926406926, 'recall': 0.8602150537634409, 'f1': 0.7673860911270983, 'number': 186} | {'precision': 0.9013848090642048, 'recall': 0.9347258485639687, 'f1': 0.9177526169621877, 'number': 2298} | {'precision': 0.9117088607594936, 'recall': 0.9445901639344262, 'f1': 0.9278582930756843, 'number': 6100} | {'precision': 0.6144427786106946, 'recall': 0.7322811197141156, 'f1': 0.6682065217391304, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6367369285518751, 'recall': 0.6873522458628841, 'f1': 0.6610771635640187, 'number': 3384} | {'precision': 0.8362168396770473, 'recall': 0.8391203703703703, 'f1': 0.8376660889659157, 'number': 864} | {'precision': 0.6334661354581673, 'recall': 0.7035398230088495, 'f1': 0.6666666666666667, 'number': 452} | {'precision': 0.6995040357872216, 'recall': 0.7318884818884819, 'f1': 0.7153299189498284, 'number': 19656} | {'precision': 0.5398574206092028, 'recall': 0.6049382716049383, 'f1': 0.5705479452054795, 'number': 1377} | 0.7426 | 0.7839 | 0.7627 | 0.9631 |
| 0.0025 | 19.0 | 35644 | 0.2990 | {'precision': 0.707874337005304, 'recall': 0.8369512783405693, 'f1': 0.7670203359858533, 'number': 2073} | {'precision': 0.8577489950870925, 'recall': 0.8806603232832741, 'f1': 0.8690536795067595, 'number': 8723} | {'precision': 0.7345506842151137, 'recall': 0.7762273187805103, 'f1': 0.7548141513658755, 'number': 43428} | {'precision': 0.8105263157894737, 'recall': 0.8279569892473119, 'f1': 0.8191489361702128, 'number': 186} | {'precision': 0.9, 'recall': 0.9399477806788512, 'f1': 0.9195402298850573, 'number': 2298} | {'precision': 0.908573236317621, 'recall': 0.9416393442622951, 'f1': 0.9248108195137659, 'number': 6100} | {'precision': 0.61839821472849, 'recall': 0.7427039904705182, 'f1': 0.6748748477878501, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6411716842961758, 'recall': 0.6985815602836879, 'f1': 0.6686465846414934, 'number': 3384} | {'precision': 0.8677184466019418, 'recall': 0.8275462962962963, 'f1': 0.8471563981042655, 'number': 864} | {'precision': 0.6414342629482072, 'recall': 0.7123893805309734, 'f1': 0.6750524109014675, 'number': 452} | {'precision': 0.6951624548736463, 'recall': 0.7347374847374848, 'f1': 0.7144023150552794, 'number': 19656} | {'precision': 0.5524115755627009, 'recall': 0.6238198983297023, 'f1': 0.5859481582537517, 'number': 1377} | 0.7443 | 0.7898 | 0.7664 | 0.9635 |
| 0.0021 | 20.0 | 37520 | 0.2981 | {'precision': 0.7228813559322034, 'recall': 0.8229618909792571, 'f1': 0.7696819309722536, 'number': 2073} | {'precision': 0.8535364768683275, 'recall': 0.8798578470709618, 'f1': 0.8664973186565058, 'number': 8723} | {'precision': 0.7315439151833142, 'recall': 0.7769411439624205, 'f1': 0.7535594242387018, 'number': 43428} | {'precision': 0.8031088082901554, 'recall': 0.8333333333333334, 'f1': 0.8179419525065963, 'number': 186} | {'precision': 0.9137055837563451, 'recall': 0.9399477806788512, 'f1': 0.9266409266409267, 'number': 2298} | {'precision': 0.9108754155453538, 'recall': 0.9432786885245902, 'f1': 0.9267939115728436, 'number': 6100} | {'precision': 0.5945041816009558, 'recall': 0.7409172126265634, 'f1': 0.6596844756728092, 'number': 3358} | {'precision': 0.9959514170040485, 'recall': 0.9979716024340771, 'f1': 0.9969604863221885, 'number': 986} | {'precision': 0.6354533152909337, 'recall': 0.693853427895981, 'f1': 0.6633705325610961, 'number': 3384} | {'precision': 0.8534278959810875, 'recall': 0.8356481481481481, 'f1': 0.8444444444444444, 'number': 864} | {'precision': 0.6076190476190476, 'recall': 0.7057522123893806, 'f1': 0.6530194472876152, 'number': 452} | {'precision': 0.6943667406192727, 'recall': 0.7324481074481074, 'f1': 0.7128992324832879, 'number': 19656} | {'precision': 0.5667556742323098, 'recall': 0.616557734204793, 'f1': 0.5906086956521739, 'number': 1377} | 0.7417 | 0.7891 | 0.7647 | 0.9639 |
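Each row above reports per-label dicts of precision, recall, and F1; the F1 values are the harmonic mean of the other two, which can be spot-checked:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall, as reported in the rows above."""
    return 2 * precision * recall / (precision + recall)

# Spot-check the first label of the final epoch (precision ~0.7229, recall ~0.8230)
print(round(f1(0.7228813559322034, 0.8229618909792571), 4))  # 0.7697
```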
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.1
|
std10012/uuu_fine_tune_taipower
|
std10012
| 2025-06-25T03:02:49Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:25:10Z |
---
license: apache-2.0
---
|
Yuichi1218/Llama-3.1-Non-filter-Lafeak73-8B-chatvector
|
Yuichi1218
| 2025-06-25T03:02:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:55:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bill0204Tung/uuu_fine_tune_taipower
|
Bill0204Tung
| 2025-06-25T03:01:48Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:23:59Z |
---
license: apache-2.0
---
|
tracylu00200/uuu_fine_tune_taipower
|
tracylu00200
| 2025-06-25T03:01:41Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:31:47Z |
---
license: apache-2.0
---
|
Cameron914/uuu_fine_tune_taipower
|
Cameron914
| 2025-06-25T03:00:26Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T01:34:11Z |
---
license: apache-2.0
---
|
Baistiac/uuu_fine_tune_taipower
|
Baistiac
| 2025-06-25T02:59:56Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:43:50Z |
---
license: apache-2.0
---
|
JS1016/uuu_fine_tune_taipower
|
JS1016
| 2025-06-25T02:59:23Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:25:52Z |
---
license: apache-2.0
---
|
Hiyuan0105/tcp2023
|
Hiyuan0105
| 2025-06-25T02:59:22Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:59:22Z |
---
license: apache-2.0
---
|
SrivatsaBhamidipati/CodeLlama-13b-Instruct-hf
|
SrivatsaBhamidipati
| 2025-06-25T02:57:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2025-06-25T00:57:34Z |
---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-13b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: CodeLlama-13b-Instruct-hf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CodeLlama-13b-Instruct-hf
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 15
- mixed_precision_training: Native AMP
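The effective batch size in the list above follows from the per-device batch size and the gradient accumulation steps:

```python
train_batch_size = 4                 # per-device micro-batch
gradient_accumulation_steps = 4      # optimizer step every 4 micro-batches
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 16
```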
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mlx-community/Cydonia-24B-v3.1-bf16
|
mlx-community
| 2025-06-25T02:55:35Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"base_model:TheDrummer/Cydonia-24B-v3.1",
"base_model:finetune:TheDrummer/Cydonia-24B-v3.1",
"region:us"
] |
text-generation
| 2025-06-25T02:41:48Z |
---
base_model: TheDrummer/Cydonia-24B-v3.1
tags:
- mlx
library_name: mlx
pipeline_tag: text-generation
---
# mlx-community/Cydonia-24B-v3.1-bf16
This model [mlx-community/Cydonia-24B-v3.1-bf16](https://huggingface.co/mlx-community/Cydonia-24B-v3.1-bf16) was
converted to MLX format from [TheDrummer/Cydonia-24B-v3.1](https://huggingface.co/TheDrummer/Cydonia-24B-v3.1)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Cydonia-24B-v3.1-bf16")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Louischong/Trellis-OA
|
Louischong
| 2025-06-25T02:50:46Z | 0 | 0 |
trellis-oa
|
[
"trellis-oa",
"image-to-3d",
"en",
"arxiv:2506.08640",
"license:mit",
"region:us"
] |
image-to-3d
| 2025-06-25T01:51:40Z |
---
library_name: trellis-oa
pipeline_tag: image-to-3d
license: mit
language:
- en
---
# TRELLIS-OA
<!-- Provide a quick summary of what the model is/does. -->
TRELLIS-OA is a large 3D generative model that produces orientation-aligned 3D objects. It was introduced in the paper [Orientation Matters: Making 3D Generative Models Orientation-Aligned](https://huggingface.co/papers/2506.08640).
Project page: https://xdimlab.github.io/Orientation_Matters/
Code: https://github.com/YichongLu/Orientation_Matters
|
fancyerii/taxi-v3
|
fancyerii
| 2025-06-25T02:50:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-25T02:50:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the Hugging Face Deep RL course helper that downloads and unpickles the model
model = load_from_hub(repo_id="fancyerii/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
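At inference time a tabular Q-learning agent simply acts greedily over its Q-table; a minimal sketch (the toy table below is illustrative, not the trained agent's values):

```python
def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)

# Toy 2-state, 3-action Q-table (illustrative values only)
qtable = [[0.1, 0.5, 0.2],
          [0.9, 0.0, 0.3]]
print(greedy_action(qtable, 0))  # 1
print(greedy_action(qtable, 1))  # 0
```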
|
jyunjia/SB0625
|
jyunjia
| 2025-06-25T02:50:21Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:31:50Z |
---
license: apache-2.0
---
|
ZeeeWP/Qwen3-8B_Qwen3-0.6B
|
ZeeeWP
| 2025-06-25T02:50:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"customize_ensemble",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2025-06-25T02:48:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cem13/lora_model1_48_0099
|
Cem13
| 2025-06-25T02:49:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-25T02:47:32Z |
---
base_model: unsloth/Qwen2.5-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cem13
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
daixuancheng/sac_static0.1_constrainbyAdv_step160
|
daixuancheng
| 2025-06-25T02:49:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-24T06:17:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergioalves/e3e2072b-f7c2-4357-bf7c-feb72a43dfc6
|
sergioalves
| 2025-06-25T02:47:49Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/tinyllama-chat",
"base_model:quantized:unsloth/tinyllama-chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T02:40:39Z |
---
base_model: unsloth/tinyllama-chat
library_name: transformers
model_name: e3e2072b-f7c2-4357-bf7c-feb72a43dfc6
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for e3e2072b-f7c2-4357-bf7c-feb72a43dfc6
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/e3e2072b-f7c2-4357-bf7c-feb72a43dfc6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/n1905edd)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
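For one preference pair, the DPO objective reduces to a logistic loss on the policy-vs-reference log-probability margin; a simplified sketch (the log-probabilities below are illustrative, not from this model):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log sigmoid(beta * margin) for a single (chosen, rejected) pair."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The policy prefers the chosen answer more strongly than the reference does
print(round(dpo_loss(-5.0, -9.0, -6.0, -8.0), 4))  # 0.5981
```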
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NTIS/hf_gemma3_21-checkpoint-128000
|
NTIS
| 2025-06-25T02:44:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:42:23Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-128000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-128000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-128000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)
# Text generation
text = "Hello"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
yaobo2816/Qwen2.5-GRPO
|
yaobo2816
| 2025-06-25T02:44:26Z | 36 | 0 | null |
[
"gguf",
"qwen2",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:LooksJuicy/ruozhiba",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-05T16:25:05Z |
---
license: mit
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- LooksJuicy/ruozhiba
---
This model produces GRPO-style reasoning responses, answering questions in a manner similar to DeepSeek R1.
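GRPO scores each sampled completion against the rest of its group rather than against a learned value model; a simplified sketch of the group-relative advantage step:

```python
import statistics

def group_relative_advantages(rewards):
    """Normalize each reward by its group's mean and (population) std dev."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against an all-equal group
    return [(r - mu) / sigma for r in rewards]

print([round(a, 3) for a in group_relative_advantages([1.0, 0.0, 0.5, 0.5])])
# [1.414, -1.414, 0.0, 0.0]
```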
|
Baistiac/llama2_uuu_news_qlora
|
Baistiac
| 2025-06-25T02:44:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:44:18Z |
---
license: apache-2.0
---
|
morning831/llama2_uuu_news_qlora
|
morning831
| 2025-06-25T02:43:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:43:28Z |
---
license: apache-2.0
---
|
crosstar/mistral_5_CoT_few_shot
|
crosstar
| 2025-06-25T02:41:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-25T02:38:56Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morning831/uuu_fine_tune_taipower
|
morning831
| 2025-06-25T02:40:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:40:51Z |
---
license: apache-2.0
---
|
fancyerii/q-FrozenLake-v1-4x4-noSlippery
|
fancyerii
| 2025-06-25T02:40:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-25T02:40:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is a helper from the Hugging Face Deep RL course that downloads and unpickles the Q-table
model = load_from_hub(repo_id="fancyerii/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
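Once the Q-table is loaded, acting with it is just greedy action selection per state. The sketch below fabricates a tiny stand-in table (a real FrozenLake 4x4 table is 16 states by 4 actions) rather than downloading the pickle, so the shape and key names are assumptions:

```python
import numpy as np

# Tiny stand-in Q-table: 4 states x 2 actions
qtable = np.array([
    [0.1, 0.9],
    [0.8, 0.2],
    [0.0, 0.5],
    [0.3, 0.3],
])

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

# Best action per state (ties resolve to the first index, as np.argmax does)
actions = [greedy_action(qtable, s) for s in range(qtable.shape[0])]
print(actions)
```

In an actual rollout you would call `greedy_action` on each observation returned by `env.step` until the episode terminates.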
|
NTIS/hf_gemma3_21-checkpoint-126000
|
NTIS
| 2025-06-25T02:39:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-25T02:37:16Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# hf_gemma3_21-checkpoint-126000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: hf_gemma3_21
- **Checkpoint**: checkpoint-126000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/hf_gemma3_21-checkpoint-126000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate text
text = "안녕하세요"  # "Hello" in Korean
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimentation purposes
- Check the license before commercial use
|
chinyua/test
|
chinyua
| 2025-06-25T02:38:58Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-25T02:38:58Z |
---
license: apache-2.0
---
|
sergioalves/9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
|
sergioalves
| 2025-06-25T02:38:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/c69dcff1-fd86-4697-8038-846c5db9095b",
"base_model:adapter:samoline/c69dcff1-fd86-4697-8038-846c5db9095b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-25T02:30:41Z |
---
library_name: peft
base_model: samoline/c69dcff1-fd86-4697-8038-846c5db9095b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: samoline/c69dcff1-fd86-4697-8038-846c5db9095b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 28572ecc5c12c5f8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 0.9
group_by_length: false
hub_model_id: sergioalves/9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-05
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/28572ecc5c12c5f8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 383bde8b-0a10-4317-a5ad-edc0e1c7e587
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 383bde8b-0a10-4317-a5ad-edc0e1c7e587
warmup_steps: 10
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 9d73281b-01e3-4c0b-832d-ac9ed96b4bcb
This model is a fine-tuned version of [samoline/c69dcff1-fd86-4697-8038-846c5db9095b](https://huggingface.co/samoline/c69dcff1-fd86-4697-8038-846c5db9095b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
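The effective batch size of 32 above comes from micro_batch_size 8 times gradient_accumulation_steps 4: gradients from several micro-batches are summed before one optimizer update. An illustrative sketch of that pattern (not the Trainer's actual loop):

```python
def train_steps(num_micro_batches, accum_steps):
    """Count optimizer updates when gradients are accumulated.

    Each micro-batch runs a forward/backward pass; the optimizer only
    steps (and gradients are only zeroed) every `accum_steps` batches.
    """
    updates = 0
    for i in range(1, num_micro_batches + 1):
        # loss.backward() would run here, accumulating into .grad
        if i % accum_steps == 0:
            updates += 1  # optimizer.step(); optimizer.zero_grad()
    return updates

# 400 micro-batches of 8 samples, accumulated 4 at a time
# -> 100 optimizer updates, each over an effective batch of 32
print(train_steps(400, 4))
```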
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3927 | 0.0002 | 1 | 1.1791 |
| 1.0764 | 0.0117 | 50 | 1.0865 |
| 1.2093 | 0.0235 | 100 | 1.0799 |
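The learning-rate trajectory behind these results — linear warmup for 10 steps to 2e-05, then cosine decay over the remaining 90 of the 100 training steps — can be sketched as follows, assuming the single half-cycle shape of transformers' `get_cosine_schedule_with_warmup`:

```python
import math

def lr_at_step(step, max_lr=2e-5, warmup_steps=10, total_steps=100):
    """Linear warmup to max_lr, then cosine decay toward 0 (one half-cycle)."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

lrs = [lr_at_step(s) for s in range(101)]
```

The peak is reached exactly at step 10; by the final step the rate has decayed essentially to zero.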
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ljnlonoljpiljm/siglip2-large-patch16-256-like-dislike-13
|
ljnlonoljpiljm
| 2025-06-25T02:38:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-25T02:37:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|