modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 06:27:01) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (542 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 06:26:44) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
AvenirInduction/model_movie_sentiment1
|
AvenirInduction
| 2025-08-11T18:45:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T18:44:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
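Until the authors fill this section in, a minimal sketch is shown below. It assumes the standard `transformers` text-classification pipeline works for this BERT checkpoint (consistent with the card's tags); the label names and input format are not documented, so treat this as illustrative only.

```python
from transformers import pipeline

# Hypothetical usage sketch: the card does not document the expected input
# format or label set, so this simply runs the stock text-classification
# pipeline against the checkpoint named in this row.
classifier = pipeline(
    "text-classification",
    model="AvenirInduction/model_movie_sentiment1",
)
result = classifier("This movie was a delightful surprise.")
print(result)  # a list like [{'label': ..., 'score': ...}]
```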
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeongseokoh/Llama3.1-8B-LatentRAG-batch-header_20st-og
|
jeongseokoh
| 2025-08-11T18:42:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:35:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80
|
MattBou00
| 2025-08-11T18:41:27Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:39:29Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80
This is an RLHF model checkpoint trained to epoch 80.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 80
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
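The Bradley-Terry likelihood named above models a pairwise preference as a logistic function of the reward difference. A minimal, illustrative sketch follows (the actual training code is not published in this card; the function name is ours):

```python
import math

# Bradley-Terry preference likelihood: the probability that completion a is
# preferred over completion b, given scalar reward scores r_a and r_b.
def bradley_terry_prob(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + math.exp(-(r_a - r_b)))

print(bradley_terry_prob(1.0, 1.0))  # equal rewards -> 0.5
```

Note the symmetry: `p(a > b) + p(b > a) = 1` for any pair of scores, which is what makes this a proper pairwise likelihood.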
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-80")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
Leemonzz/ROSPRITE
|
Leemonzz
| 2025-08-11T18:37:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:calcuis/illustrious",
"base_model:adapter:calcuis/illustrious",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-11T18:15:11Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/60382464.jpeg
text: "BF1, 1girl, solo, long hair, bangs, skirt, simple background, red eyes, long sleeves, white background, bow, holding, jewelry, standing, full body, weapon, white hair, hair bow, earrings, japanese clothes, horns, pointy ears, wide sleeves, blunt bangs, kimono, chibi, holding weapon, red bow, sash, mask, chain, obi, sandals, fire, cuffs, oni, geta, red kimono, club (weapon), spiked club, kanabou, Ragnarok online character, Black filled oval eyes, ROSPRITE, Smooth Quality"
- output:
url: images/60436862.jpeg
text: "(Ragnarok Online SPRITE style), 1girl, pale cracked porcelain skin, long flowing blonde twin-tails with (dynamic motion blur:1.4), black oval eyes (no mouth/nose), (medium sagging breasts:1.2), (toned athletic body), (short glossy yellow leather jacket open revealing light blue slingshot bikini), black pleated mini skirt with yellow stripe details, (silver combat belt with glowing blue gemstone emitting lightning:1.3), black knee-high boots (yellow metallic tips), armored gauntlets, (crackling electricity effects), dynamic mid-leap battle pose (crouching to spring), (neon blue energy trails from slingshot), (chiaroscuro lighting), dark charcoal gradient background, (chibi-proportioned anatomy:1.2), hyper-detailed textures (glossy leather/metal fabric:1.3), vibrant neon blue and yellow color scheme, (masterpiece:1.5), (ultra-detailed 8K), (sharp focus), (studio quality rendering), (intricate armor design), (electrostatic hair flow), (ROSPRITE), big breasts, saggy breasts, Smooth Quality, BF1"
- output:
url: images/60491398.jpeg
text: "1girl, solo, full body, miserydg, long hair, blonde hair, red eyes, elf, pointy ears, multicolored hair, slingshot swimsuit, cape, fur trim, o-ring, thigh boots, elbow gloves, purple gloves, BF1, Ragnarok online character, Black filled oval eyes, ROSPRITE"
- output:
url: images/60693920.jpeg
text: "BF1, Masterpiece, ultra-detailed, illustration, high resolution, anime CG, official art, game cg, unity 8k wallpaper"
- output:
url: images/60782710.jpeg
text: "(ROSPRITE, Ragnarok online character, Black filled oval eyes, no mouth, no nose), BF1, Full body, solo, masterpiece, good quality, shadow, backlighting, best quality, ultra detailed, heavy rocker themed, sun glasses, best illustration, high quality, absurd, detailed background, highly aesthetic, highly detailed, high resolution, epic, official, looking at viewer, holding, holding weapon, Black belt, Yakuza inspired, massive baseball bat, flaming bat, lips parted, cigarette in mouth, teeth, standing, full view, cute pose, oriental fencing, dark theme, 1girl, solo, red fire trail's of power, alone, Kamimura Azuma, long hair, orange hair, ponytail, lips, large breasts, revealing clothes, cropped midriff red jacket Whit metallic decorations, huge cleavage, cyan leotard, highleg leotard, ROSPRITE, Black filled oval eyes, Ragnarok online character,"
- output:
url: images/61403429.jpeg
text: "Masterpiece, persistent, coherent, consistent, 1girl, 2D-HD style, 1girl, full body,"
- output:
url: images/61596324.jpeg
text: "Pixel art, Simple background, white background, dirty,"
- output:
url: images/MSN1PGZ7E5F8W5G2F0ADBR61S0.jpeg
text: "1girl, Elven Farmhand, full-body portrait, gentle countryside morning pose, Blizzard Cinematic Render style, 8k rustic textures, Elven Agrarian � Pastoral Harmony aesthetic, golden-blonde waist-length braided hair with flower adornments � silk ribbon details, bright emerald eyes with soft sun-kissed glow, slender yet toned build, fair skin with faint tribal freckles � natural beauty marks, wearing simple linen blouse with rolled-up sleeves � earth-toned corset dress, woven straw hat with feather charm, sturdy leather boots with dust marks, holding wooden bucket with fresh produce � handwoven basket, intricate floral embroidery patterns with elven script � nature sigils, Three Breasts visibly enhanced with soft natural curves, Tribreasts anatomical realism, Ragnarok Online � WoW crossover concept, ROSPRITE HD detailing, simple background, white background, voluptuous, big breasts revealing"
base_model: calcuis/illustrious
instance_prompt: style, pixel art, ragnarok online
license: apache-2.0
---
# RAGNAROK ONLINE - SPRITE STYLE <pixel art>
<Gallery />
## Model description
Introducing our Ragnarok Online sprite LoRA model on Citivai! 🎮✨ With more than 190 high-quality images, it is perfect for fans and creators looking to take their creativity to the next level. ⚔️
Join and collaborate with other Ragnarok Online enthusiasts on Citivai. Together we can grow this epic collection!
## Trigger words
You should use `style`, `pixel art`, and `ragnarok online` to trigger the image generation.
## Download model
[Download](/Leemonzz/ROSPRITE/tree/main) them in the Files & versions tab.
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-60
|
MattBou00
| 2025-08-11T18:35:22Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:33:12Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-60
This is an RLHF model checkpoint trained to epoch 60.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 60
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from transformers import AutoModelForCausalLM
from trl import AutoModelForCausalLMWithValueHead
# Load the checkpoint
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-60")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
Tanny1412/20b-gptoss-multilingual
|
Tanny1412
| 2025-08-11T18:34:09Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-11T18:18:28Z |
# 20B GPT-OSS Multilingual Fine-tuned Model
This is a fine-tuned version of **unsloth/gpt-oss-20b** for multilingual reasoning tasks.
The model has been fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) on a custom dataset for reasoning in multiple languages.
## Model Details
- **Base model:** unsloth/gpt-oss-20b
- **Fine-tuning method:** LoRA (4-bit quantization)
- **Max sequence length:** 4096
- **Languages:** English, French, Spanish, and more
## Training
- **Framework:** PyTorch + Transformers + Unsloth
- **Dataset format:** ShareGPT → Harmony format using `apply_chat_template`
- **Epochs:** 1
- **Batch size:** 16 total (4 × 4 gradient accumulation)
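The effective batch size in the bullet above is the product of the per-device batch size and the gradient-accumulation steps; a quick sanity check (the 4 × 4 split is taken from the card, the variable names are ours):

```python
# Effective batch size = per-device batch * gradient accumulation steps.
per_device_batch = 4
grad_accum_steps = 4
effective_batch = per_device_batch * grad_accum_steps
print(effective_batch)  # 16, matching the card
```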
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Tanny1412/20b-gptoss-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
giovannidemuri/llama3b-llamab8-er-afg-v15-seed2-french-alpaca-fpt
|
giovannidemuri
| 2025-08-11T18:34:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T17:23:20Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- generated_from_trainer
model-index:
- name: llama3b-llamab8-er-afg-v15-seed2-french-alpaca-fpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3b-llamab8-er-afg-v15-seed2-french-alpaca-fpt
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
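With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.03`, the learning rate ramps from 0 to 2e-05 over the first 3% of steps, then decays linearly to 0. A small sketch of that schedule (illustrative; the Trainer computes this internally, and the function name is ours):

```python
def lr_at_step(step: int, total_steps: int,
               base_lr: float = 2e-05, warmup_ratio: float = 0.03) -> float:
    """Linear warmup followed by linear decay, as configured above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp from 0 up to base_lr over the warmup window.
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr back down to 0 at total_steps.
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

print(lr_at_step(30, 1000))    # peak value 2e-05 (warmup_steps = 30)
print(lr_at_step(1000, 1000))  # fully decayed to 0.0
```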
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
zelk12/MT-Gen3_gemma-3-12B
|
zelk12
| 2025-08-11T18:31:27Z | 0 | 0 | null |
[
"safetensors",
"gemma3",
"merge",
"mergekit",
"lazymergekit",
"IlyaGusev/saiga_gemma3_12b",
"zelk12/MT1-gemma-3-12B",
"soob3123/amoral-gemma3-12B-v2",
"zelk12/MT-Gen1-gemma-3-12B",
"zelk12/MT-gemma-3-12B",
"image-text-to-text",
"conversational",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:merge:IlyaGusev/saiga_gemma3_12b",
"base_model:soob3123/amoral-gemma3-12B-v2",
"base_model:merge:soob3123/amoral-gemma3-12B-v2",
"base_model:zelk12/MT-Gen1-gemma-3-12B",
"base_model:merge:zelk12/MT-Gen1-gemma-3-12B",
"base_model:zelk12/MT-gemma-3-12B",
"base_model:merge:zelk12/MT-gemma-3-12B",
"base_model:zelk12/MT1-gemma-3-12B",
"base_model:merge:zelk12/MT1-gemma-3-12B",
"license:gemma",
"region:us"
] |
image-text-to-text
| 2025-08-11T16:53:37Z |
---
base_model:
- IlyaGusev/saiga_gemma3_12b
- zelk12/MT1-gemma-3-12B
- soob3123/amoral-gemma3-12B-v2
- zelk12/MT-Gen1-gemma-3-12B
- zelk12/MT-gemma-3-12B
tags:
- merge
- mergekit
- lazymergekit
- IlyaGusev/saiga_gemma3_12b
- zelk12/MT1-gemma-3-12B
- soob3123/amoral-gemma3-12B-v2
- zelk12/MT-Gen1-gemma-3-12B
- zelk12/MT-gemma-3-12B
license: gemma
pipeline_tag: image-text-to-text
---
# MT-Gen3_gemma-3-12B
MT-Gen3_gemma-3-12B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [IlyaGusev/saiga_gemma3_12b](https://huggingface.co/IlyaGusev/saiga_gemma3_12b)
* [zelk12/MT1-gemma-3-12B](https://huggingface.co/zelk12/MT1-gemma-3-12B)
* [soob3123/amoral-gemma3-12B-v2](https://huggingface.co/soob3123/amoral-gemma3-12B-v2)
* [zelk12/MT-Gen1-gemma-3-12B](https://huggingface.co/zelk12/MT-Gen1-gemma-3-12B)
* [zelk12/MT-gemma-3-12B](https://huggingface.co/zelk12/MT-gemma-3-12B)
## 🧩 Configuration
```yaml
models:
- model: TheDrummer/Fallen-Gemma3-12B-v1
#no parameters necessary for base model
- model: IlyaGusev/saiga_gemma3_12b
parameters:
density: 0.5
weight: 0.5
- model: zelk12/MT1-gemma-3-12B
parameters:
density: 0.507
weight: 0.5
- model: soob3123/amoral-gemma3-12B-v2
parameters:
density: 0.615
weight: 0.5
- model: zelk12/MT-Gen1-gemma-3-12B
parameters:
density: 0.781
weight: 0.5
- model: zelk12/MT-gemma-3-12B
parameters:
density: 0.8
weight: 0.5
merge_method: dare_ties
base_model: TheDrummer/Fallen-Gemma3-12B-v1
parameters:
normalize: true
dtype: bfloat16
```
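The `dare_ties` method above applies a DARE step to each task vector (fine-tuned minus base weights): entries survive with probability `density` and survivors are rescaled by `1/density`, so the expected delta is unchanged before the TIES sign-resolution step. A toy sketch of that drop-and-rescale idea (illustrative only, not mergekit's implementation):

```python
import random

# DARE drop-and-rescale on a flat list of delta values: keep each entry with
# probability `density`, scale kept entries by 1/density, zero the rest.
def dare_drop(delta, density, rng):
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)
sparse = dare_drop([0.2] * 10000, density=0.5, rng=rng)
mean = sum(sparse) / len(sparse)
print(round(mean, 2))  # close to the original mean of 0.2
```

Higher `density` values (like the 0.781 and 0.8 used above) keep more of each model's delta, at the cost of more parameter interference between the merged models.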
## 💻 Usage
```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "zelk12/MT-Gen3_gemma-3-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ajjyy/Qwen2-0.5B-GRPO-Curiosity-attempt1-cp2330
|
ajjyy
| 2025-08-11T18:30:56Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"grpo",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:05:58Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-GRPO-Curiosity-attempt1-cp2330
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-Curiosity-attempt1-cp2330
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ajjyy/Qwen2-0.5B-GRPO-Curiosity-attempt1-cp2330", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ajyang-massachusetts-institute-of-technology/gsm8k_grpo_curiosity/runs/q9iynhyp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.20.0.dev0
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40
|
MattBou00
| 2025-08-11T18:28:40Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:26:49Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40
This is an RLHF model checkpoint saved at epoch 40.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 40
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from trl import AutoModelForCausalLMWithValueHead

# Load the checkpoint (includes the value head used during RLHF training)
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-40")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
manancode/opus-mt-sv-ZH-ctranslate2-android
|
manancode
| 2025-08-11T18:27:54Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:27:41Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-sv-ZH-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-sv-ZH` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-sv-ZH
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
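As a rough sketch of where the ~75% figure comes from (storage of the weight matrices only; tokenizer files and runtime overhead are ignored):

```python
# INT8 stores each weight in 1 byte versus 4 bytes for FP32,
# so the weight storage shrinks by roughly three quarters.
fp32_bytes_per_weight = 4
int8_bytes_per_weight = 1
reduction = 1 - int8_bytes_per_weight / fp32_bytes_per_weight
print(f"{reduction:.0%}")
```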
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-st-fr-ctranslate2-android
|
manancode
| 2025-08-11T18:27:00Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:26:46Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-st-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-st-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-st-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754936743
|
RMCian
| 2025-08-11T18:26:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:26:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manancode/opus-mt-st-en-ctranslate2-android
|
manancode
| 2025-08-11T18:26:04Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:25:50Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-st-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-st-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-st-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-ss-en-ctranslate2-android
|
manancode
| 2025-08-11T18:25:28Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:25:15Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-ss-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-ss-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-ss-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-srn-es-ctranslate2-android
|
manancode
| 2025-08-11T18:24:22Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:24:07Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-srn-es-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-srn-es` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-srn-es
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754936603
|
RMCian
| 2025-08-11T18:24:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:23:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20
|
MattBou00
| 2025-08-11T18:22:47Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"region:us"
] | null | 2025-08-11T18:21:01Z |
# mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20
This is an RLHF model checkpoint saved at epoch 20.
## Model Information
- **Base Model**: EleutherAI/pythia-1b
- **Reward Type**: irl
- **Dataset**: allenai/real-toxicity-prompts
- **Training Epoch**: 20
## IRL Configuration
- **Likelihood Type**: bradley_terry
- **Normalization Strategy**: none
- **IRL Artifact**: matthieubou-imperial-college-london/bayes_irl_vi/posterior_bradley_terry_05megofd:v0
- **Use Raw Score**: True
## Usage
This checkpoint can be loaded using the HuggingFace Transformers library:
```python
from trl import AutoModelForCausalLMWithValueHead

# Load the checkpoint (includes the value head used during RLHF training)
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00/mq028hjz-rlhf-checkpoint-pythia-1b-irl-epoch-20")
```
## Training Configuration
The training configuration is saved in `training_config.yaml`.
---
language: en
tags:
- rlhf
- checkpoint
- irl
- pythia-1b
library_name: transformers
pipeline_tag: text-generation
---
|
manancode/opus-mt-sq-en-ctranslate2-android
|
manancode
| 2025-08-11T18:22:34Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:22:21Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-sq-en-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-sq-en` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-sq-en
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
manancode/opus-mt-sn-fr-ctranslate2-android
|
manancode
| 2025-08-11T18:21:58Z | 0 | 0 | null |
[
"translation",
"opus-mt",
"ctranslate2",
"quantized",
"multilingual",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-08-11T18:21:28Z |
---
license: apache-2.0
tags:
- translation
- opus-mt
- ctranslate2
- quantized
language:
- multilingual
pipeline_tag: translation
---
# opus-mt-sn-fr-ctranslate2-android
This is a quantized INT8 version of `Helsinki-NLP/opus-mt-sn-fr` converted to CTranslate2 format for efficient inference.
## Model Details
- **Original Model**: Helsinki-NLP/opus-mt-sn-fr
- **Format**: CTranslate2
- **Quantization**: INT8
- **Framework**: OPUS-MT
- **Converted by**: Automated conversion pipeline
## Files Included
- CTranslate2 model files (quantized INT8)
- SentencePiece tokenizer files (`source.spm`, `target.spm`)
- Integration guide for Android deployment
## Usage
### With CTranslate2
```python
import ctranslate2
import sentencepiece as spm
# Load the model
translator = ctranslate2.Translator("path/to/model")
# Load tokenizers
sp_source = spm.SentencePieceProcessor(model_file="source.spm")
sp_target = spm.SentencePieceProcessor(model_file="target.spm")
# Translate
source_tokens = sp_source.encode("Your text here", out_type=str)
results = translator.translate_batch([source_tokens])
translation = sp_target.decode(results[0].hypotheses[0])
```
### Android Integration
See the included `INTEGRATION_GUIDE.txt` for Android implementation details.
## Performance
This INT8 quantized version provides:
- ~75% reduction in model size
- Faster inference speed
- Maintained translation quality
- Mobile-friendly deployment
## Original Model
Based on the OPUS-MT project: https://github.com/Helsinki-NLP/Opus-MT
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754934541
|
milliarderdol
| 2025-08-11T18:21:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:20:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
uniswap/blockassist-bc-soaring_rough_bear_1754936306
|
uniswap
| 2025-08-11T18:20:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soaring rough bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:20:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soaring rough bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754936338
|
kayacrypto
| 2025-08-11T18:20:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:20:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bambangbukan/blockassist-bc-singing_burrowing_chicken_1754936345
|
bambangbukan
| 2025-08-11T18:20:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"singing burrowing chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:20:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- singing burrowing chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
D1zzYzz/GRIT-GSM8K-QLORA-llama-3.1-8B-Energy-0.9
|
D1zzYzz
| 2025-08-11T18:19:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"alpaca",
"grit",
"lora",
"qlora",
"instruction-tuning",
"fine-tuned",
"text-generation",
"en",
"dataset:openai/gsm8k",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T18:19:22Z |
---
tags:
- llama
- alpaca
- grit
- lora
- qlora
- instruction-tuning
- fine-tuned
base_model: meta-llama/Llama-3.1-8B
library_name: peft
license: apache-2.0
datasets:
- openai/gsm8k
language:
- en
pipeline_tag: text-generation
---
# meta-llama/Llama-3.1-8B Fine-tuned with GRIT and QLoRA
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) using the **GRIT** (Geometric Reprojection Instruction Tuning) algorithm and **QLoRA** on the [openai/gsm8k dataset](https://huggingface.co/datasets/openai/gsm8k).
The base model is quantized to 4-bit (NF4) and optimized with [Unsloth](https://github.com/unslothai/unsloth) to enable efficient fine-tuning.
## Training Details
### GRIT Algorithm
- **K-FAC Updates**: Every 20 steps (adaptive) for second-order preconditioning.
- **Neural Reprojection**: Every 20 steps (adaptive) for rank optimization.
- **Rank Adaptation**: Enabled (Threshold: 0.9, Min Rank: 4).
- **Optimized LoRA Modules**: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'up_proj', 'down_proj', 'gate_proj']
### Fine-tuning Configuration
- **Base Model**: meta-llama/Llama-3.1-8B
- **Quantization**: 4-bit (NF4) with bf16 compute.
- **LoRA Rank**: 32
- **LoRA Alpha**: 64
- **Batch Size**: 8 (per device)
- **Gradient Accumulation**: 2 (Effective batch = 16)
- **Learning Rate**: 1.0e-04
- **Precision**: bf16 mixed precision
- **Sequence Length**: 1024 tokens
- **Gradient Checkpointing**: Enabled
### Performance Improvements
- ✅ **Faster Convergence**: K-FAC preconditioning aligns updates with curvature.
- ✅ **Memory-Efficient**: 4-bit quantization (QLoRA) and gradient checkpointing used.
- ✅ **Adaptive Rank**: Dynamically prunes LoRA rank to improve parameter efficiency.
## Training Metrics
- **Total Steps**: 936
- **Final Loss**: 0.8789392291990101
- **Trainable Params**: 83,886,080
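The trainable-parameter count can be sanity-checked from the LoRA configuration above. A back-of-the-envelope sketch, assuming the published Llama-3.1-8B dimensions (hidden size 4096, 32 layers, intermediate size 14336, grouped-query k/v projection width 1024):

```python
# Each adapted matrix gets two low-rank factors: (d_in x r) and (r x d_out).
r = 32
hidden, inter, kv, layers = 4096, 14336, 1024, 32

def lora_params(d_in, d_out):
    return d_in * r + r * d_out

per_layer = (
    lora_params(hidden, hidden)    # q_proj
    + lora_params(hidden, kv)      # k_proj (GQA: 8 kv heads x head_dim 128)
    + lora_params(hidden, kv)      # v_proj
    + lora_params(hidden, hidden)  # o_proj
    + lora_params(hidden, inter)   # gate_proj
    + lora_params(hidden, inter)   # up_proj
    + lora_params(inter, hidden)   # down_proj
)
print(per_layer * layers)  # 83886080, matching the reported count
```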
## Algorithm Details
- **K-FAC Preconditioning** (natural gradient) and **Neural Reprojection**, as described in the GRIT method.
- **Memory Efficient**: Covariance matrices on CPU to reduce GPU load.
## Results
In benchmark comparisons, GRIT has shown **faster convergence and better stability** than standard LoRA or fine-tuning, making it well-suited for efficient single-epoch training. The use of Unsloth further accelerates this process.
## Citation
If you use this model, please cite the original GRIT paper and:
```bibtex
@misc{grit-lora-llama-3.1-8B-gsm8k,
  title = {meta-llama/Llama-3.1-8B Fine-tuned with GRIT on openai/gsm8k},
  author = {D1zzYzz},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/D1zzYzz/GRIT-GSM8K-QLORA-llama-3.1-8B-Energy-0.9}
}
```
## License
This model inherits the Apache 2.0 license.
|
acidjp/blockassist-bc-pesty_extinct_prawn_1754935749
|
acidjp
| 2025-08-11T18:16:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pesty extinct prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:15:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pesty extinct prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-bristly_monstrous_eel_1754935021
|
motza0025
| 2025-08-11T18:16:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bristly monstrous eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:15:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bristly monstrous eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754935754
|
RMCian
| 2025-08-11T18:09:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:09:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
parkky21/lorpheus-hi-ft-1e
|
parkky21
| 2025-08-11T18:07:02Z | 0 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:canopylabs/3b-hi-pretrain-research_release",
"base_model:finetune:canopylabs/3b-hi-pretrain-research_release",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T18:06:56Z |
---
base_model: canopylabs/3b-hi-pretrain-research_release
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** parkky21
- **License:** apache-2.0
- **Finetuned from model:** canopylabs/3b-hi-pretrain-research_release
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CycloneDX/cdx1-nano-mlx-8bit
|
CycloneDX
| 2025-08-11T18:06:27Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"unsloth",
"text-generation",
"conversational",
"base_model:unsloth/Qwen3-1.7B",
"base_model:quantized:unsloth/Qwen3-1.7B",
"8-bit",
"region:us"
] |
text-generation
| 2025-08-11T11:39:11Z |
---
tags:
- unsloth
- mlx
base_model: unsloth/Qwen3-1.7B
pipeline_tag: text-generation
library_name: mlx
---
|
ErisGrey/orpheus-3b-0.1-ft-lora-ft_20250811_213044
|
ErisGrey
| 2025-08-11T18:03:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/orpheus-3b-0.1-ft",
"lora",
"transformers",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:unsloth/orpheus-3b-0.1-ft",
"region:us"
] |
text-generation
| 2025-08-11T18:01:06Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/orpheus-3b-0.1-ft
- lora
- transformers
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754935287
|
RMCian
| 2025-08-11T18:01:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T18:01:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ucfc2024/maxielmonsalve014
|
ucfc2024
| 2025-08-11T18:01:38Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-11T17:22:47Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754935018
|
RMCian
| 2025-08-11T17:57:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:57:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ahs95/sentiment-sarcasm-detection-BanglaBERT
|
ahs95
| 2025-08-11T17:53:30Z | 0 | 0 |
transformers
|
[
"transformers",
"bangla",
"sentiment-analysis",
"sarcasm-detection",
"low-resource",
"sports-analytics",
"social-media",
"text-classification",
"bn",
"base_model:csebuetnlp/banglabert_small",
"base_model:finetune:csebuetnlp/banglabert_small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-05T16:06:16Z |
---
license: apache-2.0
language:
- bn
metrics:
- f1
- precision
- recall
base_model:
- csebuetnlp/banglabert_small
pipeline_tag: text-classification
library_name: transformers
tags:
- bangla
- sentiment-analysis
- sarcasm-detection
- low-resource
- sports-analytics
- social-media
---
# BanglaBERT Dual-Head Model for Sentiment and Sarcasm Detection
## Overview
This repository contains a **fine-tuned BanglaBERT model** for **dual-head multi-label classification**: detecting both **sentiment** (positive, neutral, negative) and **sarcasm** (sarcastic, non-sarcastic) in Bangla social media text.
The model is designed for **low-resource NLP** and is trained on a manually annotated dataset of **5,635 Bangla Facebook and YouTube comments** related to Bangladesh's performance in the **2023 ICC Cricket World Cup**.
## Model Architecture
* **Base Model:** [csebuetnlp/banglabert_small](https://huggingface.co/csebuetnlp/banglabert_small)
* **Architecture:** Transformer-based dual-head classification
* Head 1: Sentiment Classification (3 classes)
* Head 2: Sarcasm Detection (2 classes)
* **Training Techniques:**
* Focal Loss with class weighting to handle **severe data imbalance**
* Multilabel stratified K-fold cross-validation
* Domain-specific data preprocessing for Bangla text
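The focal-loss-with-class-weighting idea above can be sketched in plain Python; the `gamma` and `alpha` values here are illustrative assumptions, not the ones used in training:

```python
import math

def focal_loss(probs, target, alpha, gamma=2.0):
    """Weighted focal loss for a single example.

    probs:  predicted class probabilities (already softmaxed)
    target: index of the true class
    alpha:  per-class weights (e.g. inverse class frequency)
    gamma:  focusing parameter; gamma=0 recovers weighted cross-entropy
    """
    p_t = probs[target]
    return -alpha[target] * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction is barely penalized...
easy = focal_loss([0.05, 0.05, 0.90], target=2, alpha=[1.0, 1.0, 1.0])
# ...while an uncertain prediction on an up-weighted minority class dominates the loss.
hard = focal_loss([0.40, 0.35, 0.25], target=1, alpha=[1.0, 3.0, 1.0])
print(easy, hard)
```

The `(1 - p_t) ** gamma` term is what down-weights easy majority-class examples, which is why this setup helps the Neutral and Sarcastic minority classes.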
## Dataset
* **Size:** 5,635 manually annotated comments
* **Labels:**
* Sentiment: Positive, Neutral, Negative
* Sarcasm: Sarcastic, Non-Sarcastic
* **Source:** Publicly available Facebook & YouTube comments (2023 ICC Cricket World Cup)
## Performance
| Task | Weighted F1 | Class-wise F1 (Minority) | Class-wise F1 (Majority) |
| ----------------- | ----------- | ----------------------------- | ------------------------ |
| Sentiment | **0.89** | Neutral: 0.69, Positive: 0.73 | Negative: 0.96 |
| Sarcasm Detection | **0.84** | Sarcastic: 0.60 | Non-Sarcastic: 0.91 |
**Key Gains:**
* +0.20 F1 improvement for Neutral sentiment
* +0.18 F1 improvement for Sarcastic content
* Attributed to focal loss + inverse class weighting
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ahs95/sentiment-sarcasm-detection-BanglaBERT")
model = AutoModelForSequenceClassification.from_pretrained("ahs95/sentiment-sarcasm-detection-BanglaBERT")
# Example Bangla text (a sarcastic fan comment calling the 2023 World Cup campaign an "educational tour")
text = "শিক্ষা সফর 2023 বাংলাদেশ কে ইন্ডিয়া সফল হোক"
# Tokenize
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Predict
with torch.no_grad():
    outputs = model(**inputs)

# Raw logits
print(outputs.logits)
```
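To turn raw logits into readable labels, apply a softmax and take the argmax per head. Here is a minimal sketch in plain Python; the label order is an assumption, so check the model's `config.json` (`id2label`) for the authoritative mapping:

```python
import math

SENTIMENT_LABELS = ["Negative", "Neutral", "Positive"]  # assumed order; verify via model.config.id2label

def softmax(logits):
    # Numerically stable softmax over a flat list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits, labels):
    # Return the most likely label and its probability
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Example: logits as they might come out of the sentiment head
label, confidence = decode([2.1, -0.3, 0.4], SENTIMENT_LABELS)
print(label, round(confidence, 3))  # prints the winning label and its probability
```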
## Intended Use
* **Sports analytics:** Track fan sentiment and sarcasm during live matches
* **Social media monitoring:** Identify sarcastic backlash and emotional trends
* **Brand reputation analysis:** Understand nuanced customer feedback in Bangla
## Limitations
* Domain-specific: Trained on cricket-related data; performance may drop in other contexts
* Context sensitivity: Some sarcasm requires cultural or multimodal cues (e.g., emojis)
* Not suitable for toxic speech moderation without additional fine-tuning
## Citation
If you use this model in your work, please cite:
```bibtex
@misc{hoque2025banglabertsentimentsarcasm,
  author    = {Arshadul Hoque and Nasrin Sultana and Risul Islam Rasel},
  title     = {Bangla Sentiment and Sarcasm Detection: Reactions to Bangladesh's 2023 World Cup},
  note      = {Manuscript under review},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/ahs95/sentiment-sarcasm-detection-BanglaBERT}
}
```
|
rambetiko/blockassist-bc-soft_lanky_marmot_1754934235
|
rambetiko
| 2025-08-11T17:50:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"soft lanky marmot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:50:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- soft lanky marmot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
PictorAgencia/maleta_blanca_espalda_mil
|
PictorAgencia
| 2025-08-11T17:48:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T17:26:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Maleta_Blanca_Espalda_Mil
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/PictorAgencia/maleta_blanca_espalda_mil/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PictorAgencia/maleta_blanca_espalda_mil', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/PictorAgencia/maleta_blanca_espalda_mil/discussions) to add images that show off what you've made with this LoRA.
|
yonigozlan/sam2_hiera_tiny_hf
|
yonigozlan
| 2025-08-11T17:47:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sam2_video",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:47:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ashishkattamuri/rcmas-grpo-lora-all
|
ashishkattamuri
| 2025-08-11T17:41:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"grpo",
"lora",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] |
text-generation
| 2025-08-11T17:40:49Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B-Instruct
- grpo
- lora
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754933591
|
RMCian
| 2025-08-11T17:34:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:33:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/deepseek-coder-6.7b-instruct_LeetCodeDataset
|
jahyungu
| 2025-08-11T17:33:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:finetune:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T16:24:12Z |
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- generated_from_trainer
model-index:
- name: deepseek-coder-6.7b-instruct_LeetCodeDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-coder-6.7b-instruct_LeetCodeDataset
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
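Note that the effective batch size follows from train_batch_size × gradient_accumulation_steps = 2 × 8 = 16, matching total_train_batch_size. Assuming the usual linear-warmup-then-cosine-decay shape implied by `lr_scheduler_type: cosine` and `lr_scheduler_warmup_ratio: 0.03`, the learning-rate schedule can be sketched as (the total step count below is hypothetical):

```python
import math

def lr_at(step, total_steps, peak_lr=1e-05, warmup_ratio=0.03):
    """Linear warmup to peak_lr, then cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 10_000  # hypothetical number of optimizer steps for illustration
print(lr_at(0, total), lr_at(300, total), lr_at(total, total))
```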
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
unguk/blockassist-bc-muscular_powerful_locust_1754932700
|
unguk
| 2025-08-11T17:31:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular powerful locust",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:31:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular powerful locust
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754933419
|
RMCian
| 2025-08-11T17:31:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:30:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754933387
|
ggozzy
| 2025-08-11T17:31:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:30:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
crystalline7/1857296
|
crystalline7
| 2025-08-11T17:30:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T17:30:36Z |
[View on Civ Archive](https://civitaiarchive.com/models/1731524?modelVersionId=1959678)
|
birul/blockassist-bc-long_nocturnal_frog_1754932710
|
birul
| 2025-08-11T17:30:08Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"long nocturnal frog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:30:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- long nocturnal frog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
eunkey/erpo-qwen25-vl-oom-fixed
|
eunkey
| 2025-08-11T17:29:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-10T09:17:12Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: erpo-qwen25-vl-oom-fixed
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for erpo-qwen25-vl-oom-fixed
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eunkey/erpo-qwen25-vl-oom-fixed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/xfact_journalism/huggingface/runs/z053is60)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yonigozlan/sam2.1_hiera_small_hf
|
yonigozlan
| 2025-08-11T17:28:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sam2_video",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:28:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgsilvia/q-FrozenLake-v1-4x4-noSlippery
|
dgsilvia
| 2025-08-11T17:27:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-11T17:27:28Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dgsilvia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
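Note that `load_from_hub` is a helper defined in the Hugging Face Deep RL course notebooks (it downloads and unpickles the model dictionary); it is not a library function. Once the Q-table is loaded, the agent simply acts greedily over it. A minimal sketch with an illustrative Q-table (not the trained values):

```python
import numpy as np

def greedy_action(qtable, state):
    # Pick the action with the highest Q-value for this state.
    return int(np.argmax(qtable[state]))

# Illustrative 2-state, 4-action Q-table (not the trained FrozenLake values).
qtable = np.array([
    [0.0, 0.5, 0.1, 0.2],   # state 0 -> best action is 1
    [0.9, 0.0, 0.3, 0.1],   # state 1 -> best action is 0
])

print(greedy_action(qtable, 0))  # -> 1
```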
|
nithish-007/Transformer-en2ta-fromscratch
|
nithish-007
| 2025-08-11T17:27:28Z | 0 | 0 | null |
[
"arxiv:1706.03762",
"region:us"
] | null | 2025-07-19T15:27:36Z |
# Transformer en2ta From Scratch: English to Tamil Machine Translation
This repository contains a complete **from-scratch implementation of the Transformer architecture** from the paper ["Attention is All You Need"](https://arxiv.org/abs/1706.03762), applied to a **real-world machine translation task**: English → Tamil.
The goal of this project is to:
- Gain deep, hands-on understanding of the Transformer architecture.
- Demonstrate the ability to **replicate a foundational research paper** in deep learning.
- Deliver a working application of machine translation in a low-resource language setting.
---
## Features
- Pure PyTorch implementation (no `nn.Transformer` shortcuts)
- Manual implementation of:
- Input & positional embeddings
- Multi-head scaled dot-product attention
- Encoder & decoder blocks
- Masking & layer normalization
- Custom training loop for translation
- BLEU score evaluation
- English → Tamil dataset preprocessing
---
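Of the components above, the positional embeddings are the easiest to sanity-check in isolation. A NumPy sketch of the sinusoidal positional encoding from the original paper (the repository itself implements this in PyTorch):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]         # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]      # (1, d_model / 2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)              # even dimensions
    pe[:, 1::2] = np.cos(angles)              # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=10, d_model=512)
```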
## Architecture
This project implements the full Transformer architecture as proposed in the original paper:
- 6 Encoder Layers
- 6 Decoder Layers
- 8 Attention Heads
- Model Dim: 512
- FFN Hidden Dim: 2048
---
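Each of the 8 attention heads computes scaled dot-product attention over its slice of the 512-dimensional model state. A minimal NumPy sketch of that core operation (shapes are illustrative; the repository's version is in PyTorch and adds the head split):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # (..., seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # mask out disallowed positions
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# Toy example: sequence length 3, head dimension 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```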
## Folder Structure
```bash
.
├── data/             # Raw and preprocessed data
├── models/           # Model components (encoder, decoder, attention, etc.)
├── utils/            # Tokenizers, BLEU scoring, masking utils
├── train.py          # Training loop
├── eval.py           # Evaluation script
├── inference.py      # Run translation from terminal
├── requirements.txt  # Python dependencies
└── README.md         # Project overview
```
---
## Dataset
We use a cleaned subset of the **English-Tamil parallel corpus** from [Open Parallel Corpus (OPUS)](https://opus.nlpl.eu/).
- Sentences are tokenized and preprocessed.
- Byte Pair Encoding (BPE) or SentencePiece tokenizer used.
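As a reminder of what the tokenizer does, one BPE training step counts adjacent symbol pairs across the corpus and merges the most frequent pair. A minimal sketch (illustrative only; the project uses a real BPE/SentencePiece tokenizer):

```python
from collections import Counter

def most_frequent_pair(words):
    # words: list of symbol sequences, e.g. [['l', 'o', 'w'], ['l', 'o', 'w', 'e', 'r']]
    pairs = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    # Replace every occurrence of `pair` with the concatenated symbol.
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                out.append(w[i] + w[i + 1]); i += 2
            else:
                out.append(w[i]); i += 1
        merged.append(out)
    return merged

words = [list("low"), list("lower"), list("lowest")]
pair = most_frequent_pair(words)   # ('l', 'o') or ('o', 'w'); both occur 3 times
words = merge_pair(words, pair)
```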
---
## Getting Started
### 1. Clone the repository
```bash
git clone https://github.com/nithish-007/Transformers_from_scratch.git
cd Transformers_from_scratch
```
### 2. Install dependencies
```bash
pip install -r requirements.txt
```
### 3. Preprocess data
```bash
python utils/preprocess.py --src en --tgt ta
```
### 4. Train the model
```bash
python train.py --epochs 20 --batch_size 64 --lr 1e-4
```
### 5. Evaluate
```bash
python eval.py
```
### 6. Translate
```bash
python inference.py --sentence "How are you?"
# Output: "நீங்கள் எப்படி இருக்கிறீர்கள்?"
```
---
## Results
- Evaluation metric: BLEU score
- Results after 20 epochs:
- BLEU (dev): 22.5
- BLEU (test): 21.3
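The BLEU numbers above combine modified n-gram precision with a brevity penalty. A simplified, single-reference, sentence-level sketch of the metric (real evaluations should use `sacrebleu` or `nltk`, which add smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        hyp, ref = Counter(ngrams(hypothesis, n)), Counter(ngrams(reference, n))
        overlap = sum((hyp & ref).values())   # clipped n-gram counts
        total = max(sum(hyp.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:                  # no smoothing in this sketch
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))  # brevity penalty
    return bp * math.exp(log_avg)

ref = "the cat sat on the mat".split()
assert sentence_bleu(ref, ref) == 1.0  # identical sentences score 1.0
```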
---
## Learnings
- Built Transformer model from **absolute scratch**
- Learned nuances of attention, masking, and decoder training
- Understood real-world challenges in **low-resource NLP tasks**
---
## References
- Vaswani et al., ["Attention is All You Need"](https://arxiv.org/abs/1706.03762)
- Harvard NLP Annotated Transformer
- OpenNMT, Fairseq, and PyTorch source code
---
## Acknowledgements
Thanks to the open-source NLP community and datasets. Special credit to the [OPUS corpus](https://opus.nlpl.eu/) for providing valuable multilingual data.
---
<!-- ## Contact
**Author:** Nithish Kumar
**Twitter:** [@nithish_codes](https://twitter.com/nithish_codes)
**Mail:** nithishkumar@example.com -->
---
If you like this work, give it a ⭐️ on GitHub and share it with others interested in Transformers!
---
> Work in Progress: continuous improvements to evaluation, the inference UI, and multilingual support are underway.
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754933111
|
ggozzy
| 2025-08-11T17:26:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:26:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yonigozlan/sam2.1_hiera_tiny_hf
|
yonigozlan
| 2025-08-11T17:24:53Z | 3,369 | 0 |
transformers
|
[
"transformers",
"safetensors",
"sam2_video",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-07-18T20:34:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754932960
|
RMCian
| 2025-08-11T17:23:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:23:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NEW-CAROLINY-DREHER-EROME/NEW.ORIGINAL.VIDEO.CAROLINY.DREHER.EROME.VIDEO.COMPLETO.JA.CIRCULA
|
NEW-CAROLINY-DREHER-EROME
| 2025-08-11T17:21:26Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T17:16:47Z |
[๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )](https://videohere.top/?CAROLINY-DREHER-EROME)
[โบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐คโค๏ธโค๏ธโฌ๏ธโฌ๏ธโ](https://videohere.top/?CAROLINY-DREHER-EROME)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?CAROLINY-DREHER-EROME)
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754932812
|
RMCian
| 2025-08-11T17:20:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:20:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bcywinski/qwen3-1.7b-taboo-smile
|
bcywinski
| 2025-08-11T17:20:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T17:11:39Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: transformers
model_name: qwen3-1.7b-taboo-smile
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for qwen3-1.7b-taboo-smile
This model is a fine-tuned version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bcywinski/qwen3-1.7b-taboo-smile", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/barto/qwen3-1.7b-taboo/runs/xfzp0o9y)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Lyon28/Caca-Tinny-355M
|
Lyon28
| 2025-08-11T17:19:17Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"caca",
"id",
"dataset:Lyon28/persona-caca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-08T15:31:52Z |
---
license: apache-2.0
datasets:
- Lyon28/persona-caca
language:
- id
pipeline_tag: text-generation
library_name: transformers
tags:
- caca
---
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754932561
|
ggozzy
| 2025-08-11T17:17:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:17:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chaojiang06/medreadme_medical_complex_span_identification_CWI
|
chaojiang06
| 2025-08-11T17:14:04Z | 7 | 0 | null |
[
"pytorch",
"roberta",
"arxiv:2405.02144",
"license:mit",
"region:us"
] | null | 2025-07-13T05:25:49Z |
---
license: mit
---
# Checkpoint for paper [MedReadMe: A Systematic Study for Fine-grained Sentence Readability in Medical Domain](https://arxiv.org/abs/2405.02144)
This is the best-performing medical complex span identification model trained on our dataset. This checkpoint uses a modified version of the token prediction model; you will need the code in the [GitHub repo](https://github.com/chaojiang06/medreadme/tree/main/code/complex_span_identification) to load it.
|
vad9392/venu
|
vad9392
| 2025-08-11T17:10:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T17:10:56Z |
---
license: apache-2.0
---
|
realSanemi/blockassist-bc-aquatic_snappy_tortoise_1754931804
|
realSanemi
| 2025-08-11T17:09:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic snappy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T17:09:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic snappy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wnkh/llava-med-v1.5-mistral-7b-hf
|
wnkh
| 2025-08-11T17:04:21Z | 0 | 0 | null |
[
"safetensors",
"llava",
"medical",
"vision",
"llava-mistral",
"text-generation",
"image-text-to-text",
"conversational",
"base_model:Eren-Senoglu/llava-med-v1.5-mistral-7b-hf",
"base_model:finetune:Eren-Senoglu/llava-med-v1.5-mistral-7b-hf",
"license:apache-2.0",
"region:us"
] |
image-text-to-text
| 2025-08-11T16:13:59Z |
---
license: apache-2.0
base_model:
- microsoft/llava-med-v1.5-mistral-7b
- Eren-Senoglu/llava-med-v1.5-mistral-7b-hf
tags:
- medical
- vision
- llava-mistral
- text-generation
pipeline_tag: image-text-to-text
---
# llava-med-v1.5-mistral-7b-hf
**HF-compatible conversion of [`microsoft/llava-med-v1.5-mistral-7b`](https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b)**
✅ Now directly usable with [vLLM](https://github.com/vllm-project/vllm)
---
## About This Model
This repository hosts a **Hugging Face Transformers-compatible** version of the [`microsoft/llava-med-v1.5-mistral-7b`](https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b) model.
- **Original model**: not directly usable with `vLLM` for faster inference.
- **Eren-Senoglu/llava-med-v1.5-mistral-7b-hf**: a working conversion, but it currently has a bug.
- **This version**: fully converted to the Hugging Face format, allowing direct use with `vLLM`.
Thanks to this [PR](https://huggingface.co/Eren-Senoglu/llava-med-v1.5-mistral-7b-hf/discussions/1) under the [Eren-Senoglu/llava-med-v1.5-mistral-7b-hf](https://huggingface.co/Eren-Senoglu/llava-med-v1.5-mistral-7b-hf) repo, I created this repository to host the fixed version.
All credit should go to [Eren-Senoglu](https://huggingface.co/Eren-Senoglu) and [xk-huang](https://huggingface.co/xk-huang).
## Usage
```
vllm serve wnkh/llava-med-v1.5-mistral-7b-hf
```
or
```
vllm serve wnkh/llava-med-v1.5-mistral-7b-hf --revision aeedceb
```
|
aleebaster/blockassist-bc-sly_eager_boar_1754930469
|
aleebaster
| 2025-08-11T17:00:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:59:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
metahuis/blockassist-bc-lumbering_shy_raven_1754931192
|
metahuis
| 2025-08-11T16:54:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering shy raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:54:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering shy raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
codeShare/flux_chroma_image_captioner
|
codeShare
| 2025-08-11T16:50:38Z | 159 | 2 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-06T14:07:01Z |
---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
sarungk/blockassist-bc-scented_webbed_cat_1754929509
|
sarungk
| 2025-08-11T16:50:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented webbed cat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:49:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented webbed cat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754930908
|
ggozzy
| 2025-08-11T16:49:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:49:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
briaai/BRIA-3.1-ControlNet-Union
|
briaai
| 2025-08-11T16:49:13Z | 4 | 0 |
diffusers
|
[
"diffusers",
"license:other",
"region:us"
] | null | 2025-05-04T11:57:59Z |
---
license: other
license_name: bria-legal-lobby
license_link: https://bria.ai/legal-lobby
---
# BRIA-3.1 ControlNet Union Model Card
BRIA-3.1 ControlNet-Union, trained on the foundation of [BRIA-3.1 Text-to-Image](https://huggingface.co/briaai/BRIA-3.1), supports 6 control modes, including depth (0), canny (1), colorgrid (2), recolor (3), tile (4), pose (5). This model can be jointly used with other ControlNets.
Built with a strong commitment to legal compliance and responsible AI practices, this model ensures safe and scalable generative image capabilities for commercial use.
[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-3.1-ControlNet-Union)
For more information, please visit our [website](https://bria.ai/).
Join our [Discord community](https://discord.gg/Nxe9YW9zHS) for more information, tutorials, tools, and to connect with other users!
### Get Access
BRIA-3.1-ControlNet-Union requires access to BRIA-3.1 Text-to-Image. For more information, [click here](https://huggingface.co/briaai/BRIA-3.1).
### Model Description
- **Developed by:** BRIA AI
- **Model type:** Latent Flow-Matching Text-to-Image Model
- **License:** [Commercial licensing terms & conditions.](https://bria.ai/customer-general-terms-and-conditions)
- Purchase is required to license and access the model.
- **Model Description:** ControlNet Union for BRIA-3.1 Text-to-Image model. The model generates images guided by text and a conditioned image.
- **Resources for more information:** [BRIA AI](https://bria.ai/)
## Control Mode
| Control Mode | Description |
|:------------:|:-----------:|
|0|depth
|1|canny
|2|colorgrid
|3|recolor
|4|tile
|5|pose
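The table above maps each control mode to the integer id expected by the `control_mode` argument used in the inference examples below. A small helper sketch (these names are illustrative, not part of the BRIA API):

```python
# Illustrative mapping of BRIA-3.1 ControlNet-Union control modes to their ids.
CONTROL_MODES = {
    "depth": 0,
    "canny": 1,
    "colorgrid": 2,
    "recolor": 3,
    "tile": 4,
    "pose": 5,
}

def control_mode_id(name: str) -> int:
    # Look up a mode id case-insensitively, failing loudly on unknown names.
    try:
        return CONTROL_MODES[name.lower()]
    except KeyError:
        raise ValueError(f"Unknown control mode: {name!r}") from None

print(control_mode_id("canny"))  # -> 1
```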
### Installations
```bash
pip install -qr https://huggingface.co/briaai/BRIA-3.1/resolve/main/requirements.txt
pip install diffusers==0.30.2 huggingface_hub
```
```python
from huggingface_hub import hf_hub_download
import os
try:
    local_dir = os.path.dirname(__file__)
except NameError:  # __file__ is not defined in notebooks or the REPL
    local_dir = '.'
hf_hub_download(repo_id="briaai/BRIA-3.1", filename='pipeline_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.1", filename='transformer_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.1", filename='bria_utils.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.1-ControlNet-Union", filename='pipeline_bria_controlnet.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.1-ControlNet-Union", filename='controlnet_bria.py', local_dir=local_dir)
```
# Inference
```python
import torch
from diffusers.utils import load_image
from controlnet_bria import BriaControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline
import PIL.Image as Image
RATIO_CONFIGS_1024 = {
0.6666666666666666: {"width": 832, "height": 1248},
0.7432432432432432: {"width": 880, "height": 1184},
0.8028169014084507: {"width": 912, "height": 1136},
1.0: {"width": 1024, "height": 1024},
1.2456140350877194: {"width": 1136, "height": 912},
1.3454545454545455: {"width": 1184, "height": 880},
1.4339622641509433: {"width": 1216, "height": 848},
1.5: {"width": 1248, "height": 832},
1.5490196078431373: {"width": 1264, "height": 816},
1.62: {"width": 1296, "height": 800},
1.7708333333333333: {"width": 1360, "height": 768},
}
def resize_img(control_image):
    image_ratio = control_image.width / control_image.height
    ratio = min(RATIO_CONFIGS_1024.keys(), key=lambda k: abs(k - image_ratio))
    to_height = RATIO_CONFIGS_1024[ratio]["height"]
    to_width = RATIO_CONFIGS_1024[ratio]["width"]
    resized_image = control_image.resize((to_width, to_height), resample=Image.Resampling.LANCZOS)
    return resized_image
base_model = 'briaai/BRIA-3.1'
controlnet_model = 'briaai/BRIA-3.1-ControlNet-Union'
controlnet = BriaControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipeline = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, trust_remote_code=True)
pipeline = pipeline.to(device="cuda", dtype=torch.bfloat16)
control_image_canny = load_image("https://huggingface.co/briaai/BRIA-3.1-ControlNet-Union/resolve/main/images/canny.jpg")
controlnet_conditioning_scale = 1.0
control_mode = 1
control_image_canny = resize_img(control_image_canny)
width, height = control_image_canny.size
prompt = 'In a serene living room, someone rests on a sapphire blue couch, diligently drawing in a rose-tinted notebook, with a sleek black coffee table, a muted green wall, an elegant geometric lamp, and a lush potted palm enhancing the peaceful ambiance.'
generator = torch.Generator(device="cuda").manual_seed(555)
image = pipeline(
prompt,
control_image=control_image_canny,
control_mode=control_mode,
width=width,
height=height,
controlnet_conditioning_scale=controlnet_conditioning_scale,
num_inference_steps=50,
max_sequence_length=128,
guidance_scale=5,
generator=generator,
negative_prompt="Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate"
).images[0]
image.save("result.png")  # save the generated image
```
# Multi-Controls Inference
```python
import torch
from diffusers.utils import load_image
from controlnet_bria import BriaControlNetModel, BriaMultiControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline
import PIL.Image as Image
base_model = 'briaai/BRIA-3.1'
controlnet_model = 'briaai/BRIA-3.1-ControlNet-Union'
controlnet = BriaControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
controlnet = BriaMultiControlNetModel([controlnet])
pipe = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16, trust_remote_code=True)
pipe.to("cuda")
control_image_colorgrid = load_image("https://huggingface.co/briaai/BRIA-3.1-ControlNet-Union/resolve/main/images/colorgrid.jpg")
control_image_pose = load_image("https://huggingface.co/briaai/BRIA-3.1-ControlNet-Union/resolve/main/images/pose.jpg")
control_image = [control_image_colorgrid, control_image_pose]
controlnet_conditioning_scale = [0.5, 0.5]
control_mode = [2, 5]
width, height = control_image[0].size
prompt = 'Two kids in jackets play near a tent in a forest.'
generator = torch.Generator(device="cuda").manual_seed(555)
image = pipe(
prompt,
control_image=control_image,
control_mode=control_mode,
width=width,
height=height,
controlnet_conditioning_scale=controlnet_conditioning_scale,
num_inference_steps=50,
max_sequence_length=128,
guidance_scale=5,
generator=generator,
negative_prompt="Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate"
).images[0]
```
|
Sampath1987/enery-embeddings
|
Sampath1987
| 2025-08-11T16:48:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:3110",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-11T16:15:20Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:3110
- loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L12-v2
widget:
- source_sentence: What safeguards are recommended for the failure of temperature
control related to TIC-11865?
sentences:
- '|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|
|---|---|---|---|---|---|---|
|Flow|Other Than|Blowdown valve BV<br>11014 and BV<br>11009 failing open.|Gas
routed to cold<br>flare.||||
|Flow|Other Than|Bypass around<br>PSVs 11809 and<br>11810|Gas routed to cold<br>flare.||||
|Flow|Other Than|XV inboard of export<br>metering open<br>during import or fails<br>open
during import.|Unable to compress<br>import gas to required<br>gas pressure,<br>potentially
delaying<br>plant start-up.||||
|Pressure|Less||||||
|Pressure|More||||||
|Temperature|Less||||||
|Temperature|More||||||
|Level|Less||||||
|Level|More||||||
|Composition|As Well As||||||
|Composition|Part Off||||||
|Composition|Other than||||||
|Other|Corrosion||||||
|Other|Operating<br>Mode||||||
|Other|Start Up /<br>Shutdown||||||
Page 19 of 39
KAT-ENG-S-TMP-0002 Version 1.0'
- '|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|
|---|---|---|---|---|---|---|
|Flow|Less|Low import gas<br>demand rates less<br>than system turndown<br>capability.|PV-11200
is unable<br>to control pressure,<br>resulting in system<br>instability and risk
of<br>damage to<br>equipment/piping due<br>to pressure surges.<br>Risk of damage
to<br>heater H-1120 and<br>internal plates,<br>resulting in loss of<br>containment
of gas to<br>the heating medium<br>side.<br>Overpressurisation of<br>heating medium<br>system
with loss of<br>containment of<br>hydrocarbon gas,<br>resulting in<br>fire/explosion.|Bursting
discs PSV-<br>19030A/B on shell-<br>side.<br>High high pressure<br>trip PAHH-19040
on<br>heating medium<br>system.<br>|9. Update operating<br>procedures to define
how<br>the gas import will be<br>controlled at low import<br>rates. Consider manual<br>operation
or batch import<br>of gas.<br> <br>10. Review SIL<br>determination for PAHH-<br>19040,
and ensure it<br>remains valid for the<br>hazards of high pressure.<br>Verify
it considers initiating<br>cause of plate failure due<br>to operation of heater
at<br>low turn down rates (to<br>reflect difficulty in pressure<br>control resulting
in potential<br>pressure surges). Verify<br>the achieved IL of the<br>installed
SIF.|JC<br> <br> <br> <br> <br> <br> <br>JC|
|Flow|Less|Low import gas<br>demand rates less<br>than system turndown<br>capability.|PV-11200
is unable<br>to control pressure,<br>resulting in system<br>instability and risk
of<br>damage to<br>equipment/piping due<br>to pressure surges.<br>Risk of damage
to<br>heater H-1120 and<br>internal plates,<br>resulting in loss of<br>containment
of gas to<br>the heating medium<br>side.<br>Overpressurisation of<br>heating medium<br>system
with loss of<br>containment of<br>hydrocarbon gas,<br>resulting in<br>fire/explosion.|Bursting
discs PSV-<br>19030A/B on shell-<br>side.<br>High high pressure<br>trip PAHH-19040
on<br>heating medium<br>system.<br>|10. Review SIL<br>determination for PAHH-<br>19040,
and ensure it<br>remains valid for the<br>hazards of high pressure.<br>Verify
it considers initiating<br>cause of plate failure due<br>to operation of heater
at<br>low turn down rates (to<br>reflect difficulty in pressure<br>control resulting
in potential<br>pressure surges). Verify<br>the achieved IL of the<br>installed
SIF.|10. Review SIL<br>determination for PAHH-<br>19040, and ensure it<br>remains
valid for the<br>hazards of high pressure.<br>Verify it considers initiating<br>cause
of plate failure due<br>to operation of heater at<br>low turn down rates (to<br>reflect
difficulty in pressure<br>control resulting in potential<br>pressure surges).
Verify<br>the achieved IL of the<br>installed SIF.|
|Flow|More|Spurious opening of<br>PV-11200.|High pressure gas<br>routed to downstream<br>compression
system.<br>Potential to exceed<br>pressure rating of<br>downstream<br>piping/equipment<br>(OPR
>3). Risk of<br>loss of containment<br>of hydrocarbon gas at<br>110-150 barg,<br>resulting
in<br>fire/explosion.|PSV-11807A/B (sized<br>for PV11200 failed<br>open).<br>PAHH-11839
set at<br>21 barg (initiates<br>closure of NSV-<br>11060).|||
Page 31 of 80
KAT-ENG-S-TMP-0001 Version 1.0'
- '**Document Revision History:**
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
|||||||
|||||||
|B1|13/05/2024|Issued for HAZOP|RE|IK|IK|
|Revision|Date|Reason For Issue|Originator|Checker|Approver|
Page 2 of 19
KAT-ENG-S-TMP-0002 Version 3.0'
- source_sentence: How often does the CATS drains vessel typically reach a high level
according to the excerpt?
sentences:
- '**Appendix I - Attendance Sheet**
|Job|Armada Kraken FPSO โ SKIMS RBA Session|Col3|Col4|Col5|
|---|---|---|---|---|
|**Date**<br>|**07/03/24**||||
|**your name**|**your name**|**your name**|**your role**|**your company**|
|Ian Kirkwood|Ian Kirkwood|Ian Kirkwood|Independent Facilitator|Katoni|
|Ian Kirkwood|Ian Kirkwood|Ian Kirkwood|email:Ian.Kirkwood@katoni.com|email:Ian.Kirkwood@katoni.com|
|Edmund Lau|Edmund Lau|Edmund Lau|Scribe|Katoni|
|Edmund Lau|Edmund Lau|Edmund Lau|email:Edmund.Lau@katoni.com|email:Edmund.Lau@katoni.com|
|Helen Drewery|Helen Drewery|Helen Drewery|HSEQ Manager|Bumi Armada|
|Helen Drewery|Helen Drewery|Helen Drewery|email:h.drewery@bumiarmada.com|email:h.drewery@bumiarmada.com|
|Campbell Ross|Campbell Ross|Campbell Ross|Marine Superintendent|Bumi Armada|
|Campbell Ross|Campbell Ross|Campbell Ross|email: campbell.ross@bumiarmada.com|email:
campbell.ross@bumiarmada.com|
|Rod MacLeod|Rod MacLeod|Rod MacLeod|VP Operations|Bumi Armada|
|Rod MacLeod|Rod MacLeod|Rod MacLeod|email: rod.m@bumiarmada.com|email: rod.m@bumiarmada.com|
|Susan MacGregor|Susan MacGregor|Susan MacGregor|Asset Integrity Engineer|Bumi
Armada|
|Susan MacGregor|Susan MacGregor|Susan MacGregor|email: susan.macgregor@bumiarmada.com|email:
susan.macgregor@bumiarmada.com|
|Andrew Comley|Andrew Comley|Andrew Comley|Marine Structural TA|A Comley Ltd|
|Andrew Comley|Andrew Comley|Andrew Comley|email:andrew.comley@acomleyltd.com|email:andrew.comley@acomleyltd.com|
|Kiran Vinjam|Kiran Vinjam|Kiran Vinjam|TBC|Imrandd|
|Kiran Vinjam|Kiran Vinjam|Kiran Vinjam|email:Kiran.V@imrandd.com|email:Kiran.V@imrandd.com|
|Ranald Cartwright|Ranald Cartwright|Ranald Cartwright|TBC|Imrandd|
|Ranald Cartwright|Ranald Cartwright|Ranald Cartwright|email:Ranald.C@imrandd.com|email:Ranald.C@imrandd.com|
Page 15 of 22
KAT-ENG-S-TMP-0002 Version 3.0'
- 'The pigging operation is heavily reliant on procedural controls to ensure the
correct sequence of
operation is followed and safety of the asset and personnel is ensured. Valve
interlocks shall be fitted
to operational valves to minimise risk of mal-operation.
Page 7 of 36'
- 'DOCUMENT TITLE CATS HAZOP Terms of Reference
DOCUMENT No. CHR202-E-001-S-TOR-0001-REVB2
### **5. Agenda**
Facilitated session of approximately 2hrs in duration, held as a Teams conference
call meeting between
team members.
Date: **2nd June 2021**
Time: **14:30 - 16:30**
Location: **Teams meeting**
Room: **N/A**
The programme will be based on the suggested outline below;
|Item<br>No|Item|Approximate Duration|
|---|---|---|
|1|Introductions.|5 mins|
|2|Discussion on project scope, agreement on scope<br>and extent of study and
identification of study nodes.|10 mins|
|4|HAZOP (main study)<br> For node to be studied (1 off):<br>- Define the design
intent and operating<br>conditions of the node.<br>- Apply guidewords.<br>- Record
deviations from the design intent.|90 mins|
|5|Summarise HAZOP outcomes and agree action plan.|10 mins|
|||**Total 2 hours**|
### **6. Attendees**
The following attendees will be required to attend the HAZOP session (or send
a deputy):
|Name|Position|Company|Role|
|---|---|---|---|
|Ian Kirkwood|Tech Safety<br>Engineer|Katoni|Facilitator|
|Craig Cuthbertson|Process Engineer|Katoni|Scribe|
|James Keachie|Lead Process<br>Engineer|Katoni|Discipline|
|Duncan Brown|Piping Engineer /<br>Project Manager|Katoni|Discipline|
|Gary Sorrie|Modifications Project<br>Engineer (Execute)|Harbour Energy|Discipline|
|Ian Minto|Technical Safety<br>Technical Authority|Harbour Energy|Discipline|
|Ewan Sinton|HOLD|Harbour Energy|Discipline|
|Barry Milne|HOLD|Harbour Energy|Discipline|
|Gary Robinson|HOLD|Harbour Energy|Discipline|
Page 10 of 32'
- source_sentence: Who is responsible for the recommendation related to the local
facility for venting and draining of the exchanger shell?
sentences:
- '|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|
|---|---|---|---|---|---|---|
|Other|Emergency<br>Shutdown /<br>Blowdown||||||
|Other|Control of the<br>Plant||||||
|Other|Availability<br>(reliability,<br>sparing, etc)||||||
|Other|Maintenance|Single stream PSVs /<br>Bursting discs.|Shutdown of import<br>heater
to replace PSVs<br>/ Bursting discs.||21. Evaluate the<br>requirement for dual<br>stream
interlocked PSVs<br>and bursting discs<br>(design).<br>(Closed out)|Andrew<br>Keatings,<br>WGE(NS)|
|Other|Maintenance|No local facility for<br>venting and draining<br>of exchanger
shell.|Loss of containment<br>during removal of<br>plate.||22. Investigate the<br>requirement
for installing<br>suitable vent and drain<br>points on the exchanger<br>(design).<br>(Closed
out)|Andrew<br>Keatings,<br>WGE(NS)|
|Other|Condition<br>monitoring||||||
|Other|Inspection /<br>testing -||||||
|Other|Accessibility /<br>mechanical<br>handling||||||
|Other|Isolation / re-<br>instatement||||||
Page 25 of 39
KAT-ENG-S-TMP-0002 Version 1.0'
- '**Appendix III - Typical HAZID Worksheet**
|[project title] - HAZID Worksheet|Col2|Rev: A1|Date: 15/11/23|By: Katoni Engineering|
|---|---|---|---|---|
|**System:**Power Generation|**Node:**1|**Description:**Temporary power generation
package, electrical and diesel system tie-ins.|**Description:**Temporary power
generation package, electrical and diesel system tie-ins.|**Description:**Temporary
power generation package, electrical and diesel system tie-ins.|
|**Design Intent:**The design intent is to allow connection of a packaged Temporary
Diesel Driven Generator unit to the Western Isles to provide<br>additional power
supply for DFPV activities in the event of loss of import gas.<br>Flow โ [xx]
m3/hr<br>Pressure โ Design: [xx/xx] barg, Operating: [xx] barg<br>Temperature
โ Design: [-x/x]oC, Operating: [xx]oC<br>Composition โ [fluid details]|**Design
Intent:**The design intent is to allow connection of a packaged Temporary Diesel
Driven Generator unit to the Western Isles to provide<br>additional power supply
for DFPV activities in the event of loss of import gas.<br>Flow โ [xx] m3/hr<br>Pressure
โ Design: [xx/xx] barg, Operating: [xx] barg<br>Temperature โ Design: [-x/x]oC,
Operating: [xx]oC<br>Composition โ [fluid details]|**Design Intent:**The design
intent is to allow connection of a packaged Temporary Diesel Driven Generator
unit to the Western Isles to provide<br>additional power supply for DFPV activities
in the event of loss of import gas.<br>Flow โ [xx] m3/hr<br>Pressure โ Design:
[xx/xx] barg, Operating: [xx] barg<br>Temperature โ Design: [-x/x]oC, Operating:
[xx]oC<br>Composition โ [fluid details]|**Design Intent:**The design intent is
to allow connection of a packaged Temporary Diesel Driven Generator unit to the
Western Isles to provide<br>additional power supply for DFPV activities in the
event of loss of import gas.<br>Flow โ [xx] m3/hr<br>Pressure โ Design: [xx/xx]
barg, Operating: [xx] barg<br>Temperature โ Design: [-x/x]oC, Operating: [xx]oC<br>Composition
โ [fluid details]|**Design Intent:**The design intent is to allow connection of
a packaged Temporary Diesel Driven Generator unit to the Western Isles to provide<br>additional
power supply for DFPV activities in the event of loss of import gas.<br>Flow โ
[xx] m3/hr<br>Pressure โ Design: [xx/xx] barg, Operating: [xx] barg<br>Temperature
โ Design: [-x/x]oC, Operating: [xx]oC<br>Composition โ [fluid details]|
|Guideword|Hazard|Cause|Consequence|Safeguards<br>in place|Ranking|Col7|Col8|Recommendations
/<br>Comments /<br>Additional<br>Safeguards|Action|
|---|---|---|---|---|---|---|---|---|---|
|**Guideword**|**Hazard**|**Cause**|**Consequence**|**Safeguards**<br>**in place**|**L**|**
C**|**Risk**|**Risk**|**Risk**|
|Process<br>Parameters||||||||||
|Equipment<br>Parameters||||||||||
|Occupational||||||||||
|Maintenance||||||||||
|Construction /<br>Commissioning||||||||||
|Fire / Explosion||||||||||
|~~Blowout~~||||||||||
|Non Process<br>Fire||||||||||
|Ignition Sources||||||||||
|~~Explosives~~||||||||||
|Layout||||||||||
Page 20 of 29'
- '|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|
|---|---|---|---|---|---|---|
|Flow|As Well As|Oil & solids<br>contaminated<br>heating media<br>supply.|Potential
fouling of the<br>heat exchanger and<br>reduction in efficiency.<br> <br>Potential
failing of the<br>valves.|Plate pack can be<br>removed, cleaned and<br>if necessary,
replaced.|19. Consider filtration<br>requirement for heating<br>media (design).<br>(Closed
out)<br>20. Evaluate the most<br>appropriate valve<br>specification for the<br>contaminated
heating<br>media (design).<br>(Closed out)|Andrew<br>Keatings,<br>WGE(NS)<br>
<br>Archie<br>Murdoch,<br>WGE(NS)|
|Pressure|Less||||||
|Pressure|More||||||
|Temperature|Less||||||
|Temperature|More||||||
|Level|Less||||||
|Level|More||||||
|Composition|As Well As||||||
|Composition|Part Off||||||
|Composition|Other than||||||
|Other|Corrosion||||||
|Other|Operating<br>Mode||||||
|Other|Start Up /<br>Shutdown||||||
Page 24 of 39
KAT-ENG-S-TMP-0002 Version 1.0'
- source_sentence: What is the significance of training and clarity of roles and responsibilities
in relation to new starts?
sentences:
- '|Guideword|Guideword Prompt|Cause Prompt|HAZID Consequence<br>Prompt|
|---|---|---|---|
|Training|New Starts<br>Clarity of roles and<br>responsibilities<br>Inductions<br>Local
customs /<br>standards|Human Error<br>Insufficient training /<br>monitoring of
new<br>starts<br>Changes in systems<br>Lack of experience<br>Reviews|Injury<br>Fatality|
|Safety Critical<br>Tasks|||Injury<br>Fatality|
|Vendor<br>Equipment|Insulation<br>Location<br>Adjacent equipment,<br>valves,
gauges etc|Operators unable to<br>access equipment|Non Optimal Design<br>|
|Management<br>System|Documentation<br>Monitoring and<br>reporting<br>Permits
and consents<br>Internal standards|Changes to existing<br>procedure and<br>requirements||
Page 33 of 36'
- "**6. SECT Identification Meeting**\n\n\n**6.1** **Facilities**\n\n\nThe study\
\ will be held as a face-to-face meeting between team members, with option to\
\ dial in to the\nTeams conference call for those unable to attend in person.\n\
\n\n**6.2** **Reference Documents**\n\n\nThe following information sets will be\
\ provided to each participant:\n\n\n - Apache Management of SECT Procedure (Ref\
\ /1/)\n\n\nAll other reference documents, such as Performance Standards, Safety\
\ Case, Operating Procedures\netc will be made available for reference and use\
\ as required.\n\n\n**6.3** **Methodology**\n\n\nThe SECT Identification study\
\ will be a multidiscipline meeting with representatives from Katoni and\nApache.\n\
\n\nThe main approach to task identification to date (during desktop exercise)\
\ has been by review of\ncomprehensive lists of tasks, with input from the Apache\
\ Human Factors specialist and the Forties Delta\nOIM.\n\n\nThe list-based task\
\ identification was completed for the following task groups:\n\n|Group of Tasks|Description|\n\
|---|---|\n|**Operational**|Operating procedures are available for a large variety\
\ of<br>tasks on the Forties Delta.|\n|**Maintenance/Inspection/Testing**|Review\
\ of the Performance Standards has been performed<br>to help create a list, on\
\ the basis that Performance Standards<br>have been assigned to SECEs protecting\
\ against MAHs|\n|**Process Upset**|Review of LOPA workshops for critical alarms\
\ and all credit<br>taken for Human Intervention.|\n\n\n\nThe workshop shall serve\
\ as a means to review the list that has been compiled for accuracy, and to add\n\
any additional tasks to the register.\n\n\nFollowing completion of the SECT register,\
\ the consequence of the task failure and the degree of\nhuman involvement shall\
\ be assessed for each task. A matrix will be used, as defined within the\nApache\
\ SECTA Procedure, as shown in Table 6-1, Table 6-2 and Table 6-3 (Ref /1/).\n\
\n\nPage 7 of 13\nKAT-ENG-S-TMP-0006 Version 1.0"
- '**6. Identifying hazard boundaries & cascaded protection**
One of the frequent causes of incorrect SIL assessment resulting in over-protection
is the failure to
clearly demark individual hazards and take credit for the SIFโs of upstream equipment.
Firstly, it is
important wherever possible that the analysis is carried out in the process flow
direction. Secondly,
where a potential cause of the hazard (demand rate) relates to upstream process
conditions or
equipment failures, it is important to โtake creditโ for the SIL level of upstream
SIF(s).
For example, if a pressure vessel is estimated to have a demand rate for over-pressure
at once per year
and is given a SIL2 assessment, then the demand rate for over pressure for equipment
immediately
downstream of this vessel (to the same pressure rating) is reduced by the SIL2
factor of 100 to 1000.
(put another way, for the downstream over-pressure to develop there would have
to be an initiating
cause PLUS failure of the SIS protecting the upstream equipment.)
It is important to only consider the consequences of hazards related to the specific
item of equipment
being assessed, and not include downstream equipment. For example, it may be that
there is no overpressure hazard for the vessel under consideration but there will
be for the next vessel downstream.
The downstream items will subsequently be assessed in their own right, taking
credit for any upstream
SIFโs. It may be that analysis of an item of downstream equipment will cause one
to go back and
increase the SIL for some upstream equipment for convenience, but this is best
considered when the
downstream equipment is reached, or sometimes even when implementation is being
designed.
If this is not followed then it is easy for the assessment to result in even minor
hazards cascading to the
worst-case scenario, with high SILs assigned to more items of equipment than is
necessary.
Page 13 of 32
KAT-ENG-S-TMP-0002 Version 5.0'
- source_sentence: What are some potential causes of accidents mentioned for occupational
safety?
sentences:
- "**Appendix II - Reference Documentation for HAZOP**\n\n\nThe relevant process\
\ safety information provided by Bumi Armada for the HAZOP\n\n\n - P&IDs,\n\n\
\ - Piping class specifications (electronically),\n\n - Line list (electronically\
\ as required),\n\n - C&Es,\n\n - RGA/LOPA Reports (electronically as required),\n\
\n - Scenarios considered for sizing devices (such as PSVs) (electronically as\
\ required),\n\n - Set point register (electronically as required),\n\n - Facility\
\ plot plan/unit layout drawings (electronically as required),\n\n - Locked valve\
\ register (electronically as required),\n\n - Operating procedures (electronically\
\ if required),\n\n - Control system philosophy and description (electronically\
\ as required).\n\n - Isolation / Blowdown / Relief Philosophy\n\n\nPage 18 of\
\ 23"
- '|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|
|---|---|---|---|---|---|---|
||Accessibility /<br>mechanical<br>handling||||||
||Isolation / re-<br>instatement||||||
||Depressurising<br>/ purging /<br>venting||||||
||Washing /<br>draining / gas<br>freeing||||||
||Personnel<br>Hazards (toxic<br>gas, radiation,<br>noise,<br>vibration, etc.)||||||
||Lessons<br>Learnt||||||
Page 23 of 28
KAT-ENG-S-TMP-0002 Version 2.0'
- '|Guideword|Guideword Prompt|Cause Prompt|HAZID Consequence<br>Prompt|
|---|---|---|---|
|~~Diving~~|~~SIMOPS~~ <br>~~Support Vessel~~<br>~~Procedures~~|~~Equipment Failure~~
<br>~~Human error~~|~~Fatality~~|
|~~Radiation~~|~~Ionising Radiation (LSA~~ <br>~~Scale)~~<br>~~Nucleonics~~ <br>~~NDT~~
<br>~~Disposal~~||~~Injury~~|
|Safety Systems|Fire & Gas Detection<br>Isolations<br>~~ESD~~ <br>~~Blowdown~~
<br>Passive Fire Detection<br>Active Fire Protection<br>(water & foam)<br>~~Fire
Walls~~ <br>~~Blast Walls~~<br>Bunds<br>Drains<br>~~CO2 Systems~~ <br>~~Water
Mist System~~<br>~~Inergen~~ <br>Hydrants / Hose reels<br>~~Helideck Hydrants~~
<br>~~HIPPS~~|Equipment failure|Failure to manage<br>hazards|
|~~Flaring / Venting~~|~~Normal~~<br>~~Emergency~~<br>~~Peak Load~~ <br>~~Thermal
Radiation~~ <br>~~Flammable Cloud~~ <br>~~Toxic Cloud~~<br>~~Molecular Weight~~<br>~~Ignition
Systems~~|~~Unignited flare~~<br>~~Composition change~~ <br>~~Rate change~~|~~Exposure
of personnel~~ <br>~~and plant to thermal~~ <br>~~radiation, flammable &~~ <br>~~toxic
concentrations.~~|
|Control Systems|DCS failure<br>Alarms and trips<br>Overrides and resets<br>Control
room<br>Ergonomics<br>Control room location<br>Hydraulic lines<br>Human Factors|Equipment
failure<br>Human error<br>Alarm handling<br>Operator interface โ<br>(controls
easy to<br>identify and reach)<br>Consistent terminology<br>Clear labelling<br>Familiar
language<br>Confusing control<br>status<br>Alarm overload<br>Consistent alarms<br>Consistent
executive<br>actions / philosophy<br>Plant trip|Operator overload โ<br>poor decisions<br>Escalation
of event<br>Incorrect actions<br>Injury<br>Fatality|
Page 30 of 36'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: ai job validation
type: ai-job-validation
metrics:
- type: cosine_accuracy
value: 0.9664948582649231
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) on the emb_fn dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision c004d8e3e901237d8fa7e9fff12774962e391ce5 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- emb_fn
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
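The three modules above (transformer encoder, mean pooling, L2 normalization) can be reproduced with plain ๐ค Transformers. A minimal sketch, using the base checkpoint for illustration (substitute this repository's id to embed with the fine-tuned weights; the sentence texts and the `embed` helper are our own):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Base checkpoint used here for illustration; substitute "Sampath1987/enery-embeddings"
# to use the fine-tuned weights from this repository.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L12-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L12-v2")

def embed(sentences):
    # Same 128-token cap as the Transformer module above
    enc = tokenizer(sentences, padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # Mean pooling over non-padding tokens (the Pooling module)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    pooled = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # L2 normalization (the Normalize module), so dot products equal cosine similarities
    return F.normalize(pooled, p=2, dim=1)

emb = embed(["gas import heater", "pressure safety valve"])
print(emb.shape)  # torch.Size([2, 384])
```

Because the outputs are unit-normalized, `emb @ emb.T` directly yields the cosine similarity matrix.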
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("Sampath1987/enery-embeddings")
# Run inference
sentences = [
'What are some potential causes of accidents mentioned for occupational safety?',
'|Guideword|Guideword Prompt|Cause Prompt|HAZID Consequence<br>Prompt|\n|---|---|---|---|\n|~~Diving~~|~~SIMOPS~~ <br>~~Support Vessel~~<br>~~Procedures~~|~~Equipment Failure~~ <br>~~Human error~~|~~Fatality~~|\n|~~Radiation~~|~~Ionising Radiation (LSA~~ <br>~~Scale)~~<br>~~Nucleonics~~ <br>~~NDT~~ <br>~~Disposal~~||~~Injury~~|\n|Safety Systems|Fire & Gas Detection<br>Isolations<br>~~ESD~~ <br>~~Blowdown~~ <br>Passive Fire Detection<br>Active Fire Protection<br>(water & foam)<br>~~Fire Walls~~ <br>~~Blast Walls~~<br>Bunds<br>Drains<br>~~CO2 Systems~~ <br>~~Water Mist System~~<br>~~Inergen~~ <br>Hydrants / Hose reels<br>~~Helideck Hydrants~~ <br>~~HIPPS~~|Equipment failure|Failure to manage<br>hazards|\n|~~Flaring / Venting~~|~~Normal~~<br>~~Emergency~~<br>~~Peak Load~~ <br>~~Thermal Radiation~~ <br>~~Flammable Cloud~~ <br>~~Toxic Cloud~~<br>~~Molecular Weight~~<br>~~Ignition Systems~~|~~Unignited flare~~<br>~~Composition change~~ <br>~~Rate change~~|~~Exposure of personnel~~ <br>~~and plant to thermal~~ <br>~~radiation, flammable &~~ <br>~~toxic concentrations.~~|\n|Control Systems|DCS failure<br>Alarms and trips<br>Overrides and resets<br>Control room<br>Ergonomics<br>Control room location<br>Hydraulic lines<br>Human Factors|Equipment failure<br>Human error<br>Alarm handling<br>Operator interface โ<br>(controls easy to<br>identify and reach)<br>Consistent terminology<br>Clear labelling<br>Familiar language<br>Confusing control<br>status<br>Alarm overload<br>Consistent alarms<br>Consistent executive<br>actions / philosophy<br>Plant trip|Operator overload โ<br>poor decisions<br>Escalation of event<br>Incorrect actions<br>Injury<br>Fatality|\n\n\nPage 30 of 36',
'|Parameter|Guideword|Cause|Consequence|Safeguards|Recommendation|By|\n|---|---|---|---|---|---|---|\n||Accessibility /<br>mechanical<br>handling||||||\n||Isolation / re-<br>instatement||||||\n||Depressurising<br>/ purging /<br>venting||||||\n||Washing /<br>draining / gas<br>freeing||||||\n||Personnel<br>Hazards (toxic<br>gas, radiation,<br>noise,<br>vibration, etc.)||||||\n||Lessons<br>Learnt||||||\n\n\n\nPage 23 of 28\nKAT-ENG-S-TMP-0002 Version 2.0',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.4921, 0.0253],
# [0.4921, 1.0000, 0.1642],
# [0.0253, 0.1642, 1.0000]])
```
## Evaluation
### Metrics
#### Triplet
* Dataset: `ai-job-validation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9665** |
## Training Details
### Training Dataset
#### emb_fn
* Dataset: emb_fn
* Size: 3,110 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 18.11 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 120.3 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 119.46 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
  |:-------|:---------|:---------|
| <code>What attaches to the platform infrastructure according to the HAZID worksheet for temporary power generation?</code> | <code>**Appendix IX- HAZID Worksheet**<br><br><br><br><br><br><br>\|SER201 โ Temporary Power Generation - HAZID<br>Worksheet\|Col2\|Rev: B1\|Date: 28-06-2022\|By: Katoni Engineering\|<br>\|---\|---\|---\|---\|---\|<br>\|**System:**Diesel Supply\|**Node:**1\|**Description:**Drilling platform weather deck temporary power generation package and diesel<br>system\|**Description:**Drilling platform weather deck temporary power generation package and diesel<br>system\|**Description:**Drilling platform weather deck temporary power generation package and diesel<br>system\|<br>\|**Scope:**<br>The scope covers installation of temporary power generation equipment including diesel distribution system and tie-in, day tank diesel tie-in and day<br>generator diesel supply and return configuration and integrate them with the platform infrastructure.<br>**Activities:** <br>โข<br>Lifting, isolations, tie-ins of Temporary Power Generator package (3 No. Containerised Generators, 1 No. Transformer & 1 No. Fuel Skid)<br>โข<br>Installation and tie-ins of diesel dis...</code> | <code>**5. Node Identification**<br><br><br>The HAZID study will be based on a single node covering the following areas:<br><br><br>**Node 1** - Temporary power generation package, electrical and diesel system tie-ins.<br><br><br>The above listed node will be subjected to team agreement at the HAZID session.<br><br><br>**6. 
Agenda**<br><br><br>Facilitated session of approximately 3hrs in duration, held as a Teams video conference call between<br>team members.<br><br><br>Date: **15** **[th]** **Nov 2023**<br><br><br>Time: **13:00 โ 16:00**<br><br><br>Location: **Microsoft Teams Meeting**<br><br><br>Room: **N/A**<br><br><br>The programme will be based on the suggested outline below:<br><br><br><br><br><br><br><br><br><br>\|Item<br>No\|Item\|Approximate Duration\|<br>\|---\|---\|---\|<br>\|1\|Introductions.\|5 mins\|<br>\|2\|Overview of project scope, agreement on scope and<br>extent of study.\|10 mins\|<br>\|3\|HAZID (main study)<br>- Define the scope of the node and activities to be<br>carried out.<br>- Apply guidewords.<br>- Record identified hazards and mitigations.\|145 mins*\|<br>\|4\|Summarise HAZID outcomes and agree action plan.\|10 mins\|<br>\|\|\|*...</code> |
| <code>What is the PFD value for the level indication of 26VG001 in the report?</code> | <code>**Dana Petroleum PLC**<br><br><br>**Procedure**<br><br><br>**Functional Safety Management Procedure**<br><br><br><br>Revision No. 04<br>Revision Date 01-Apr-2015<br>Page No. page 22 of 72<br><br><br><br>**Table 3 - Environmental Calibrated Risk Graph Parameters**<br><br><br><br><br><br><br><br><br><br>\|Risk<br>Parameter\|Category\|Description\|Remarks\|<br>\|---\|---\|---\|---\|<br>\|Consequenc<br>e โCโ\|C0\|No Requirements\|\|<br>\|Consequenc<br>e โCโ\|CA <br>Negligable\|Little or no known impact on<br>environment or ecosystem.<br>Contained locally with no remediation<br>required.\|\|<br>\|Consequenc<br>e โCโ\|CB <br>Minor\|Localised minimal impact on<br>environment or ecosystem. Any<br>reduced environmental quality is<br>short lived with no mitigation or<br>remedial effort required.\|\|<br>\|Consequenc<br>e โCโ\|CC <br>Significant\|Uncontained or sustained release<br>impacting on the immediate vicinity.<br>Limited damage which is easily<br>remediated over a short term\|\|<br>\|Consequenc<br>e โCโ\|CD <br>Major\|Severe lasting ecological damage to<br>immediate vicinity. 
Widespread<br>impact with major contribu...</code> | <code>\|Guideword\|Guideword Prompt\|Cause Prompt\|HAZID Consequence<br>Prompt\|<br>\|---\|---\|---\|---\|<br>\|~~Diving~~\|~~SIMOPS~~<br>~~Support Vessel~~<br>~~Procedures~~\|~~Equipment Failure~~<br>~~Human error~~\|~~Fatality~~\|<br>\|~~Radiation~~\|~~Ionising Radiation (LSA~~ <br>~~Scale)~~<br>~~Nucleonics~~<br>~~NDT~~<br>~~Disposal~~\|\|~~Injury~~\|<br>\|Safety Systems\|Fire & Gas Detection<br>Isolations<br>ESD<br>~~Blowdown~~<br>Passive Fire Detection<br>Active Fire Protection<br>(water & foam)<br>Fire Walls<br>Blast Walls<br>Bunds<br>Drains<br>CO2 Systems<br>Water Mist System<br>Inergen<br>Hydrants / Hose reels<br>~~Helideck Hydrants~~<br>~~HIPPS~~\|Equipment failure\|Failure to manage<br>hazards\|<br>\|~~Flaring / Venting~~\|~~Normal~~<br>~~Emergency~~<br>~~Peak Load~~<br>~~Thermal Radiation~~<br>~~Flammable Cloud~~<br>~~Toxic Cloud~~<br>~~Molecular Weight~~<br>~~Ignition Systems~~\|~~Unignited flare~~<br>~~Composition change~~<br>~~Rate change~~\|~~Exposure of personnel~~<br>~~and plant to thermal~~ <br>~~radiation, flammable...</code> |
| <code>What precautions should be taken regarding media dust following a media spare changeout?</code> | <code>**Appendix II - HAZOP Guidewords**<br><br><br>**Parameter** **Guideword**<br><br>**Flow** No / Less<br><br>More<br><br>Reverse<br><br>Misdirected<br><br>**Pressure** Less (including vacuum)<br>More (including surge, hammer and slugging)<br><br>**Temperature** Less<br><br>More<br><br>**Level** Less<br><br>(including interface) More<br><br>**Composition** As Well As (something extra)<br>(component, concentration or Part Off (something missing)<br>phase)<br><br>Other Than (something different)<br><br>**Other**<br><br>Corrosion / Erosion<br><br>Operating Mode<br>Start Up / Shutdown<br>Emergency Shutdown / Blowdown<br><br>Control of the Plant<br><br>Availability of the Plant (reliability, sparing, etc.)<br><br>Maintenance of the Plant<br><br> - condition monitoring<br><br> - inspection / testing <br> - accessibility / mechanical handling<br><br> - isolation / re-instatement<br><br> - depressurising / purging / venting<br><br> - washing / draining / gas freeing<br>Personnel Hazards (toxic gas, radiation, noise, vibration, etc.)<br><br>Lessons Le...</code> | <code>**3. HAZID Meeting**<br><br><br>**3.1** **Methodology**<br><br>The HAZID will be a multidiscipline meeting with representatives from Dana, Samphire and Katoni.<br><br><br>The approach will be one of a team based structured โbrainstormingโ using typical guidewords, as shown<br>in Appendix III - HAZID Guidewords, to prompt discussion. Hazards identified shall be risk ranked in<br>accordance with Danaโs Risk Assessment Matrix /1/.<br><br><br>**3.2** **Reporting**<br><br>The findings of the study will be recorded on Word software. A typical example of the HAZID recording<br>format is illustrated in Appendix IV - HAZID Worksheet.<br><br>When a recommendation is generated this will be clearly identified. 
Once all recommendations are<br>closed and approved by either the Katoni Technical Safety Engineer or Project Manager, the HAZID<br>report will be re-issued. The report will include completed responses in full and will be issued for client<br>acceptance. For a recommendation to be properly closed out, the response must be supported by<br>evidence that it h...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
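For intuition, the loss above — scaled cosine similarities over in-batch negatives with a cross-entropy objective — can be sketched in plain Python. This is an illustrative re-implementation under the listed parameters (`scale=20.0`, cosine similarity), not the sentence-transformers library code:

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mnrl_loss(anchors, positives, scale=20.0):
    """Multiple negatives ranking loss: for each anchor, its own positive is
    the target and every other in-batch positive acts as a negative."""
    n = len(anchors)
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [scale * cos_sim(a, p) for p in positives]
        # Numerically stable cross-entropy with target index i.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[i]
    return total / n
```

With `gather_across_devices: false`, negatives come only from the local batch, as in this sketch.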
### Evaluation Dataset
#### emb_fn
* Dataset: emb_fn
* Size: 388 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 388 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 18.14 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 118.57 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 118.84 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---|:---|:---|
| <code>What two options are available for the death/injury parameter in the safety risk graph?</code> | <code>**5.4** **Multiple causes**<br><br>Where there are multiple initiating causes of the same hazardous scenario, initiating cause likelihoods<br>should be addressed by either:<br><br><br> - Summing the initiating causes and selecting the appropriate demand rate (W1/W2/W3),<br>where the other parameters of the risk graph are identical (occupancy, probability of fatality<br>and probability of avoidance for a SIL). This is the preferred method.<br><br> - Assessing each Initiating cause separately and evaluating the IL for each one. Then a<br>judgement should be made regarding what the overall IL for the hazard should be. For<br>example, if there are many initiating causes for the same hazard assessed as SIL1, then an<br>overall SIL2 may be appropriate.<br><br><br>Page 12 of 32<br>KAT-ENG-S-TMP-0002 Version 5.0</code> | <code>\|Guideword\|Hazard\|Cause\|Consequence\|Safeguard(s)\|Ranking\|Col7\|Col8\|Col9\|Action /<br>Recommendation\|Action<br>by\|Target<br>Date\|<br>\|---\|---\|---\|---\|---\|---\|---\|---\|---\|---\|---\|---\|<br>\|**Guideword**\|**Hazard**\|**Cause**\|**Consequence**\|**Safeguard(s)**\|**Type**\|**Sev.**\|**Lik.**\|**Risk**\|**Risk**\|**Risk**\|**Risk**\|<br>\|Electricity\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Ergonomics<br>hazards\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Worksite\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Organisation<br>and planning\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Emergency<br>response\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Electromagneti<br>c radiation\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Ionisation<br>radiation\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Asphyxiation\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Toxic /<br>Carcinogens\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Loss of utilities\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Diving\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Platform<br>intakes /<br>discharges\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Waste /<br>discharge\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Portable 
/<br>temporary<br>equipment\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Work\|\|\|\|\|\|\|\|\|\|\|\|<br>\|Isolation of<br>safety systems\|\|\|\|\|\|\|\|\|\|\|\|<br>\|\|\|\|\|\|\|\|\|\|\|\|\|<br><br><br><br>Page 14 of 22<br>KAT-ENG-S-TMP-0002 Version 3.0</code> |
| <code>What does a frequency rating of 1 correspond to in the Generic '8x8' Risk Matrix?</code> | <code>Risk Assessment Matrices OMS-2A-02<br><br><br>**4.2** **Consequences โ Environmental Risks**<br><br><br><br>\|Severity\|Environmental โ Atmospheric\|Environmental โ Water\|<br>\|---\|---\|---\|<br>\|A\|An unintentional release of gas amounting to a<br>CO2 equivalent of >200,000 tonnes\|A release to sea assessed to cause damage to the receiving environment or<br>species over a period of >12 months that requires a shoreline response\|<br>\|B\|An unintentional release of gas amounting to a<br>CO2 equivalent of >100,000 tonnes\|A release to sea assessed to cause damage to the receiving environment or<br>species over a period of >12 months that does not do involve a shoreline<br>response\|<br>\|C\|An unintentional release of gas amounting to a<br>CO2 equivalent of >50,000 tonnes\|A release to sea assessed to cause damage to the receiving environment or<br>species over a period of >6 months\|<br>\|D\|An unintentional release of gas amounting to a<br>CO2 equivalent of >25,000 tonnes\|A release to sea assessed to cause damage to the receiving environm...</code> | <code>Risk Assessment Matrices OMS-2A-02<br><br><br>**4.3** **Consequences โ Business, Social and Governance Risks**<br><br><br><br><br><br><br><br>\|Severity\|Business โ Additional Guidance\|<br>\|---\|---\|<br>\|A\|Catastrophic loss of company value (e.g. catastrophic drop in share price)\|<br>\|B\|Loss of single asset or other significant / material facility to Serica<br>Significant drop in share price\|<br>\|C\|Loss of Licence to Operate Facility<br>Significant prolonged Enforcement Action / Prosecution by a regulatory body.<br>Significant prolonged adverse media coverage.<br>Loss of Shareholder confidence.\|<br>\|D\|Prosecution / civil sanctions by regulatory body e.g. HSE, BEIS or ICO Prosecution / Prohibition Notice / significant level of<br>Enforcement Action. 
Regulatory Investigation.<br>Prolonged adverse media coverage<br>Loss of partner confidence\|<br>\|E\|Enforcement Action by a regulatory body (or potential for prosecution), e.g. HSE Improvement Notice, ICO Enforcement Notice.<br>Reportable event which may lead to regulatory investigation.<br>Shor...</code> |
| <code>What recommendation is given regarding control system capacity?</code> | <code>\|Parameter\|Guideword\|Cause\|Consequence\|Safeguards\|Recommendation\|By\|<br>\|---\|---\|---\|---\|---\|---\|---\|<br>\|Flow\|As Well As\|Oil & solids<br>contaminated<br>heating media<br>supply.\|Potential fouling of the<br>heat exchanger and<br>reduction in efficiency.<br> <br>Potential failing of the<br>valves.\|Plate pack can be<br>removed, cleaned and<br>if necessary, replaced.\|19. Consider filtration<br>requirement for heating<br>media (design).<br>(Closed out)<br>20. Evaluate the most<br>appropriate valve<br>specification for the<br>contaminated heating<br>media (design).<br>(Closed out)\|Andrew<br>Keatings,<br>WGE(NS)<br> <br>Archie<br>Murdoch,<br>WGE(NS)\|<br>\|Pressure\|Less\|\|\|\|\|\|<br>\|Pressure\|More\|\|\|\|\|\|<br>\|Temperature\|Less\|\|\|\|\|\|<br>\|Temperature\|More\|\|\|\|\|\|<br>\|Level\|Less\|\|\|\|\|\|<br>\|Level\|More\|\|\|\|\|\|<br>\|Composition\|As Well As\|\|\|\|\|\|<br>\|Composition\|Part Off\|\|\|\|\|\|<br>\|Composition\|Other than\|\|\|\|\|\|<br>\|Other\|Corrosion\|\|\|\|\|\|<br>\|Other\|Operating<br>Mode\|\|\|\|\|\|<br>\|Other\|Start Up /<br>Shutdown\|\|\|\|\|\|<br><br><br><br>Page 24 of 39<br>KAT-ENG-S-TMP-0002 Version 1.0</code> | <code>**8. Risk graphs**<br><br><br>**8.1** **Safety risk graph**<br><br>Experience in the application of the IEC61511 process has resulted in the development of a calibrated<br>safety risk graph that allows a quick and consistent application of the assessment process. It also allows<br>the application of modifiers and IPLโs but only as factors of 10 (in line with the order of magnitude<br>approach of IEC61511. This is shown in figure 8-1. There are four parameters that have to be assessed,<br>as follows. 
Each of these is described in more detail in the corresponding section below.<br><br><br>**(1) Exposure** (mean number of people in hazard zone at any time, affected by manning<br>levels and hazard range and consideration of dependency with the hazard scenario)<br><br><br>**(2) Death/Injury** (2 options, selection of death or injury dependent on hazard energy<br>magnitude)<br><br><br>**(3) Possibility of avoidance of hazard** (2 options, dependent on rate of development of<br>hazard, any independent warnings that might be available; also used to accomm...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
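With a linear scheduler and `warmup_ratio: 0.1`, the learning rate ramps up over the first 10% of training steps and then decays linearly to zero. A sketch of that shape (mirroring, not calling, the Transformers scheduler):

```python
def linear_warmup_lr(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at the final step.
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)
```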
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | ai-job-validation_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:---------------------------------:|
| -1 | -1 | - | - | 0.6314 |
| 2.0408 | 100 | 2.9918 | 2.1239 | 0.9304 |
| 4.0816 | 200 | 1.7066 | 1.4476 | 0.9665 |
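The `cosine_accuracy` column counts a triplet as correct when the anchor embedding is more cosine-similar to its positive than to its negative. A minimal sketch of that metric, assuming plain Python lists of embedding vectors:

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two non-zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def cosine_accuracy(anchors, positives, negatives):
    # Fraction of triplets where the positive outranks the negative.
    correct = sum(
        1
        for a, p, n in zip(anchors, positives, negatives)
        if cos_sim(a, p) > cos_sim(a, n)
    )
    return correct / len(anchors)
```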
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.53.3
- PyTorch: 2.8.0+cu128
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
cprat/Taxi-v3
|
cprat
| 2025-08-11T16:46:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-11T16:46:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older setups may use `import gym` instead

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course
model = load_from_hub(repo_id="cprat/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
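Once loaded, the Q-table can drive a greedy policy. A minimal rollout sketch, assuming the pickle stores the table under `"qtable"` as in the Deep RL course notebooks (key name is an assumption here):

```python
def greedy_action(qtable, state):
    # Pick the action with the highest Q-value for this state.
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

def play_episode(env, qtable, max_steps=100):
    # Run one greedy episode and return the total reward collected.
    state, _ = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = greedy_action(qtable, state)
        state, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward
```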
|
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1754927143
|
kittygirlhere
| 2025-08-11T16:43:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"twitchy beaked coral",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:42:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- twitchy beaked coral
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
motza0025/blockassist-bc-prehistoric_sleek_chameleon_1754929104
|
motza0025
| 2025-08-11T16:38:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prehistoric sleek chameleon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:38:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prehistoric sleek chameleon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754929958
|
RMCian
| 2025-08-11T16:33:16Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:33:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jovar1/blockassist-bc-bold_hulking_rooster_1754929892
|
Jovar1
| 2025-08-11T16:33:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold hulking rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:32:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold hulking rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754929807
|
ggozzy
| 2025-08-11T16:31:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:31:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dacryt/blockassist-bc-voracious_vicious_gecko_1754927767
|
Dacryt
| 2025-08-11T16:28:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"voracious vicious gecko",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:28:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- voracious vicious gecko
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754929472
|
RMCian
| 2025-08-11T16:25:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:25:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
annahahn/v3000q
|
annahahn
| 2025-08-11T16:24:03Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T15:11:43Z |
---
license: apache-2.0
---
|
jahyungu/deepseek-math-7b-instruct_LeetCodeDataset
|
jahyungu
| 2025-08-11T16:22:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"base_model:finetune:deepseek-ai/deepseek-math-7b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:17:49Z |
---
library_name: transformers
license: other
base_model: deepseek-ai/deepseek-math-7b-instruct
tags:
- generated_from_trainer
model-index:
- name: deepseek-math-7b-instruct_LeetCodeDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-math-7b-instruct_LeetCodeDataset
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
hdong0/Qwen2.5-Math-1.5B-GRPO_deepscaler_prompt1
|
hdong0
| 2025-08-11T16:21:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-1.5B",
"base_model:finetune:Qwen/Qwen2.5-Math-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:30:53Z |
---
base_model: Qwen/Qwen2.5-Math-1.5B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: Qwen2.5-Math-1.5B-GRPO_deepscaler_prompt1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-Math-1.5B-GRPO_deepscaler_prompt1
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/Qwen2.5-Math-1.5B-GRPO_deepscaler_prompt1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
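At the heart of GRPO is a group-relative advantage: several completions are sampled per prompt, and each completion's reward is normalized by the group's mean and standard deviation. A sketch of that normalization step (illustrative only, not the TRL implementation):

```python
import math

def group_relative_advantages(rewards, eps=1e-8):
    # Normalize each reward against the statistics of its own sample group.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]
```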
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1754927457
|
coelacanthxyz
| 2025-08-11T16:18:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:18:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
daslab-testing/Qwen3-1.7B-FPQuant-QAT-NVFP4-200steps
|
daslab-testing
| 2025-08-11T16:17:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"fp_quant",
"region:us"
] |
text-generation
| 2025-08-11T16:16:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bph/dpo-DialoGPT-small-debug
|
bph
| 2025-08-11T16:15:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T16:07:29Z |
---
base_model: microsoft/DialoGPT-small
library_name: transformers
model_name: dpo-DialoGPT-small-debug
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo-DialoGPT-small-debug
This model is a fine-tuned version of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bph/dpo-DialoGPT-small-debug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/benjheymann/dpo-DialoGPT-small-debug/runs/ymg8nfuw)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
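A minimal sketch of the DPO objective (illustrative, not the TRL code): the loss pushes the policy's log-probability margin between the chosen and rejected responses above the reference model's margin, with `beta` controlling the strength; the value 0.1 here is an assumed default for illustration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = (policy_chosen_logp - policy_rejected_logp) - \
             (ref_chosen_logp - ref_rejected_logp)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy's preference margin matches the reference's, the loss sits at log 2; widening the margin on the chosen response drives it toward zero.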
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754928669
|
kapalbalap
| 2025-08-11T16:12:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:11:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wildansofhal/IndoBERT-Sentiment-Analysis8v2
|
wildansofhal
| 2025-08-11T16:10:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T16:10:23Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IndoBERT-Sentiment-Analysis8v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoBERT-Sentiment-Analysis8v2
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4324
- Accuracy: 0.9077
- F1 Score: 0.9073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|
| 0.6356 | 0.1096 | 50 | 0.6418 | 0.65 | 0.6481 |
| 0.6133 | 0.2193 | 100 | 0.5997 | 0.6782 | 0.6750 |
| 0.5781 | 0.3289 | 150 | 0.5178 | 0.7449 | 0.7445 |
| 0.5433 | 0.4386 | 200 | 0.4351 | 0.8051 | 0.8051 |
| 0.4154 | 0.5482 | 250 | 0.4331 | 0.8026 | 0.8019 |
| 0.467 | 0.6579 | 300 | 0.3819 | 0.8462 | 0.8459 |
| 0.3623 | 0.7675 | 350 | 0.4463 | 0.8410 | 0.8397 |
| 0.3316 | 0.8772 | 400 | 0.4174 | 0.8551 | 0.8548 |
| 0.3407 | 0.9868 | 450 | 0.5784 | 0.8141 | 0.8101 |
| 0.2882 | 1.0965 | 500 | 0.4091 | 0.8769 | 0.8768 |
| 0.2379 | 1.2061 | 550 | 0.5138 | 0.8603 | 0.8590 |
| 0.2828 | 1.3158 | 600 | 0.5102 | 0.8744 | 0.8730 |
| 0.2148 | 1.4254 | 650 | 0.4847 | 0.8833 | 0.8824 |
| 0.262 | 1.5351 | 700 | 0.4366 | 0.8987 | 0.8981 |
| 0.3484 | 1.6447 | 750 | 0.3786 | 0.9090 | 0.9086 |
| 0.1367 | 1.7544 | 800 | 0.4582 | 0.8949 | 0.8942 |
| 0.2344 | 1.8640 | 850 | 0.4343 | 0.9064 | 0.9060 |
| 0.2519 | 1.9737 | 900 | 0.4315 | 0.9077 | 0.9073 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Bijima/mistral-7b-modern-npc
|
Bijima
| 2025-08-11T16:09:56Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T15:12:15Z |
---
license: apache-2.0
---
|
SvalTek/ColdBrew-12B-Nemo-test2
|
SvalTek
| 2025-08-11T16:05:24Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"lazymergekit",
"region:us"
] | null | 2025-08-11T16:00:44Z |
---
base_model: SvalTek/Qwen3-ColdBrew-14B
tags:
- merge
- lazymergekit
---
# ColdBrew-12B-Nemo-test2
## 🧩 Configuration
```yaml
name: ColdBrew-12B-Nemo-test2
models:
- model: SvalTek/ColdBrew-12B-Nemo-test1
parameters:
weight: 1.0
- model: elinas/Chronos-Gold-12B-1.0
parameters:
weight: 0.3
base_model: SvalTek/ColdBrew-12B-Nemo-test1
merge_method: task_arithmetic
chat_template: "chatml"
tokenizer:
source: union # keep everyone's vocab; union is a documented option
tokens:
"<|im_start|>":
source: "elinas/Chronos-Gold-12B-1.0"
force: true
"<|im_end|>":
source: "elinas/Chronos-Gold-12B-1.0"
force: true
dtype: bfloat16
normalize: true
int8_mask: true
```
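Under the hood, `task_arithmetic` adds weighted task vectors (each fine-tuned model's weights minus the base weights) onto the base model. A toy per-parameter sketch on plain Python lists, with weights matching the 1.0 / 0.3 split in the config above (this is an illustration of the method, not mergekit's actual tensor code):

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge by adding weighted task vectors (finetuned - base) to the
    base weights, one scalar parameter at a time."""
    merged = list(base)
    for model, w in zip(finetuned_models, weights):
        for i, (p, b) in enumerate(zip(model, base)):
            merged[i] += w * (p - b)
    return merged
```

With `weight: 0.3`, only 30% of Chronos-Gold's deviation from the base is blended in, which keeps the base model's behavior dominant.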
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "SvalTek/ColdBrew-12B-Nemo-test2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
BinBashir/SmallNaijaBert_on_jumia_dataset
|
BinBashir
| 2025-08-11T16:04:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T16:04:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afasdfdfadsf/blockassist-bc-exotic_slimy_horse_1754928140
|
afasdfdfadsf
| 2025-08-11T16:04:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"exotic slimy horse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T16:03:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- exotic slimy horse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754927862
|
IvanJAjebu
| 2025-08-11T15:59:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:58:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
apriasmoro/320b708d-ec59-47ef-94a4-9b7a16694a01
|
apriasmoro
| 2025-08-11T15:59:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"axolotl",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T15:58:53Z |
---
library_name: transformers
model_name: app/checkpoints/4485b45c-26b1-485f-b391-5493eea942f6/320b708d-ec59-47ef-94a4-9b7a16694a01
tags:
- generated_from_trainer
- trl
- grpo
- axolotl
licence: license
---
# Model Card for app/checkpoints/4485b45c-26b1-485f-b391-5493eea942f6/320b708d-ec59-47ef-94a4-9b7a16694a01
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/320b708d-ec59-47ef-94a4-9b7a16694a01", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
New-Clip-Lil-tay-viral-video-Link-on/Exclusive.Orginal.full.Videos.Lil.tay.Lil.tay.viral.video.Official.Tutorial
|
New-Clip-Lil-tay-viral-video-Link-on
| 2025-08-11T15:57:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T15:57:40Z |
|
lil-tay-viral-video-new-link/Orginal.18.full.Videos.lil.tay.viral.video.Official
|
lil-tay-viral-video-new-link
| 2025-08-11T15:53:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T15:53:13Z |
|
Jovar1/blockassist-bc-bold_hulking_rooster_1754927461
|
Jovar1
| 2025-08-11T15:52:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold hulking rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:51:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold hulking rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3
|
ArtusDev
| 2025-08-11T15:50:57Z | 0 | 0 |
transformers
|
[
"transformers",
"chat",
"exl3",
"en",
"zh",
"base_model:baichuan-inc/Baichuan-M2-32B",
"base_model:quantized:baichuan-inc/Baichuan-M2-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T13:39:56Z |
---
base_model: baichuan-inc/Baichuan-M2-32B
base_model_relation: quantized
quantized_by: ArtusDev
license: apache-2.0
tags:
- chat
- exl3
library_name: transformers
language:
- en
- zh
---
## EXL3 Quants of baichuan-inc/Baichuan-M2-32B
EXL3 quants of [baichuan-inc/Baichuan-M2-32B](https://huggingface.co/baichuan-inc/Baichuan-M2-32B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/baichuan-inc_Baichuan-M2-32B-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
yoriis/ce-task-70
|
yoriis
| 2025-08-11T15:49:56Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:14287",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"arxiv:1908.10084",
"base_model:yoriis/ce-final",
"base_model:finetune:yoriis/ce-final",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-11T15:49:26Z |
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:14287
- loss:BinaryCrossEntropyLoss
base_model: yoriis/ce-final
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on yoriis/ce-final
results:
- task:
type: cross-encoder-classification
name: Cross Encoder Classification
dataset:
name: eval
type: eval
metrics:
- type: accuracy
value: 0.9767002518891688
name: Accuracy
- type: accuracy_threshold
value: 0.6093786954879761
name: Accuracy Threshold
- type: f1
value: 0.8514056224899598
name: F1
- type: f1_threshold
value: 0.08044017106294632
name: F1 Threshold
- type: precision
value: 0.8412698412698413
name: Precision
- type: recall
value: 0.8617886178861789
name: Recall
- type: average_precision
value: 0.8904592423807994
name: Average Precision
---
# CrossEncoder based on yoriis/ce-final
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [yoriis/ce-final](https://huggingface.co/yoriis/ce-final) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [yoriis/ce-final](https://huggingface.co/yoriis/ce-final) <!-- at revision 83b2db24dab0f081cc808ae8789a4d5469c79682 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("yoriis/ce-task-70")
# Get scores for pairs of texts
pairs = [
    ['ما المخلوقات التي تسبح الله؟', 'يا بني آدم إما يأتينكم رسل منكم يقصون عليكم آياتي فمن اتقى وأصلح فلا خوف عليهم ولا هم يحزنون. والذين كذبوا بآياتنا واستكبروا عنها أولئك أصحاب النار هم فيها خالدون. فمن أظلم ممن افترى على الله كذبا أو كذب بآياته أولئك ينالهم نصيبهم من الكتاب حتى إذا جاءتهم رسلنا يتوفونهم قالوا أين ما كنتم تدعون من دون الله قالوا ضلوا عنا وشهدوا على أنفسهم أنهم كانوا كافرين.'],
    ['اتهم القرآن بأنه السبب في الدكتاتورية الإسلامية لكونه أباح ضرب النساء في حالة النشوز، كيف نرد على ذلك؟', 'إذ قال الله يا عيسى ابن مريم اذكر نعمتي عليك وعلى والدتك إذ أيدتك بروح القدس تكلم الناس في المهد وكهلا وإذ علمتك الكتاب والحكمة والتوراة والإنجيل وإذ تخلق من الطين كهيئة الطير بإذني فتنفخ فيها فتكون طيرا بإذني وتبرئ الأكمه والأبرص بإذني وإذ تخرج الموتى بإذني وإذ كففت بني إسرائيل عنك إذ جئتهم بالبينات فقال الذين كفروا منهم إن هذا إلا سحر مبين. وإذ أوحيت إلى الحواريين أن آمنوا بي وبرسولي قالوا آمنا واشهد بأننا مسلمون.'],
    ['ما هو الجهاد؟', '[PASSAGE_NOT_FOUND]'],
    ['هل كان سيدنا يوسف عليه السلام رسولا أم نبيا؟', 'الرجال قوامون على النساء بما فضل الله بعضهم على بعض وبما أنفقوا من أموالهم فالصالحات قانتات حافظات للغيب بما حفظ الله واللاتي تخافون نشوزهن فعظوهن واهجروهن في المضاجع واضربوهن فإن أطعنكم فلا تبغوا عليهن سبيلا إن الله كان عليا كبيرا. وإن خفتم شقاق بينهما فابعثوا حكما من أهله وحكما من أهلها إن يريدا إصلاحا يوفق الله بينهما إن الله كان عليما خبيرا.'],
    ['ما هي المواقع الصحية لصلاة الفجر؟', 'وقال الله لا تتخذوا إلهين اثنين إنما هو إله واحد فإياي فارهبون. وله ما في السماوات والأرض وله الدين واصبا أفغير الله تتقون. وما بكم من نعمة فمن الله ثم إذا مسكم الضر فإليه تجأرون. ثم إذا كشف الضر عنكم إذا فريق منكم بربهم يشركون. ليكفروا بما آتيناهم فتمتعوا فسوف تعلمون.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'ما المخلوقات التي تسبح الله؟',
    [
        'يا بني آدم إما يأتينكم رسل منكم يقصون عليكم آياتي فمن اتقى وأصلح فلا خوف عليهم ولا هم يحزنون. والذين كذبوا بآياتنا واستكبروا عنها أولئك أصحاب النار هم فيها خالدون. فمن أظلم ممن افترى على الله كذبا أو كذب بآياته أولئك ينالهم نصيبهم من الكتاب حتى إذا جاءتهم رسلنا يتوفونهم قالوا أين ما كنتم تدعون من دون الله قالوا ضلوا عنا وشهدوا على أنفسهم أنهم كانوا كافرين.',
        'إذ قال الله يا عيسى ابن مريم اذكر نعمتي عليك وعلى والدتك إذ أيدتك بروح القدس تكلم الناس في المهد وكهلا وإذ علمتك الكتاب والحكمة والتوراة والإنجيل وإذ تخلق من الطين كهيئة الطير بإذني فتنفخ فيها فتكون طيرا بإذني وتبرئ الأكمه والأبرص بإذني وإذ تخرج الموتى بإذني وإذ كففت بني إسرائيل عنك إذ جئتهم بالبينات فقال الذين كفروا منهم إن هذا إلا سحر مبين. وإذ أوحيت إلى الحواريين أن آمنوا بي وبرسولي قالوا آمنا واشهد بأننا مسلمون.',
        '[PASSAGE_NOT_FOUND]',
        'الرجال قوامون على النساء بما فضل الله بعضهم على بعض وبما أنفقوا من أموالهم فالصالحات قانتات حافظات للغيب بما حفظ الله واللاتي تخافون نشوزهن فعظوهن واهجروهن في المضاجع واضربوهن فإن أطعنكم فلا تبغوا عليهن سبيلا إن الله كان عليا كبيرا. وإن خفتم شقاق بينهما فابعثوا حكما من أهله وحكما من أهلها إن يريدا إصلاحا يوفق الله بينهما إن الله كان عليما خبيرا.',
        'وقال الله لا تتخذوا إلهين اثنين إنما هو إله واحد فإياي فارهبون. وله ما في السماوات والأرض وله الدين واصبا أفغير الله تتقون. وما بكم من نعمة فمن الله ثم إذا مسكم الضر فإليه تجأرون. ثم إذا كشف الضر عنكم إذا فريق منكم بربهم يشركون. ليكفروا بما آتيناهم فتمتعوا فسوف تعلمون.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Classification
* Dataset: `eval`
* Evaluated with [<code>CrossEncoderClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderClassificationEvaluator)
| Metric | Value |
|:----------------------|:-----------|
| accuracy | 0.9767 |
| accuracy_threshold | 0.6094 |
| f1 | 0.8514 |
| f1_threshold | 0.0804 |
| precision | 0.8413 |
| recall | 0.8618 |
| **average_precision** | **0.8905** |
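The two threshold rows above are the cutoffs at which raw model scores are binarized before accuracy and F1 are computed. As a rough illustration of how these metrics fall out of a set of scores at a given threshold (pure Python; the scores and labels below are made up for the example, not taken from the evaluation set):

```python
def classification_metrics(scores, labels, threshold):
    """Binarize scores at `threshold`, then compute accuracy/precision/recall/F1."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Illustrative scores and gold labels, binarized at the reported accuracy_threshold
scores = [0.91, 0.07, 0.63, 0.02, 0.88]
labels = [1, 0, 1, 0, 0]
print(classification_metrics(scores, labels, threshold=0.6094))  # accuracy, precision, recall, f1
```

The evaluator sweeps thresholds internally, which is why the accuracy-maximizing and F1-maximizing cutoffs differ.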
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 14,287 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 11 characters</li><li>mean: 41.23 characters</li><li>max: 201 characters</li></ul> | <ul><li>min: 19 characters</li><li>mean: 213.75 characters</li><li>max: 1086 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.08</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
  | <code>ما المخلوقات التي تسبح الله؟</code> | <code>يا بني آدم إما يأتينكم رسل منكم يقصون عليكم آياتي فمن اتقى وأصلح فلا خوف عليهم ولا هم يحزنون. والذين كذبوا بآياتنا واستكبروا عنها أولئك أصحاب النار هم فيها خالدون. فمن أظلم ممن افترى على الله كذبا أو كذب بآياته أولئك ينالهم نصيبهم من الكتاب حتى إذا جاءتهم رسلنا يتوفونهم قالوا أين ما كنتم تدعون من دون الله قالوا ضلوا عنا وشهدوا على أنفسهم أنهم كانوا كافرين.</code> | <code>0.0</code> |
  | <code>اتهم القرآن بأنه السبب في الدكتاتورية الإسلامية لكونه أباح ضرب النساء في حالة النشوز، كيف نرد على ذلك؟</code> | <code>إذ قال الله يا عيسى ابن مريم اذكر نعمتي عليك وعلى والدتك إذ أيدتك بروح القدس تكلم الناس في المهد وكهلا وإذ علمتك الكتاب والحكمة والتوراة والإنجيل وإذ تخلق من الطين كهيئة الطير بإذني فتنفخ فيها فتكون طيرا بإذني وتبرئ الأكمه والأبرص بإذني وإذ تخرج الموتى بإذني وإذ كففت بني إسرائيل عنك إذ جئتهم بالبينات فقال الذين كفروا منهم إن هذا إلا سحر مبين. وإذ أوحيت إلى الحواريين أن آمنوا بي وبرسولي قالوا آمنا واشهد بأننا مسلمون.</code> | <code>0.0</code> |
  | <code>ما هو الجهاد؟</code> | <code>[PASSAGE_NOT_FOUND]</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
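Since `activation_fn` is the identity, the model emits raw logits and the loss folds the sigmoid into the cross-entropy (the usual BCE-with-logits formulation). A small, self-contained sketch of that computation for a single logit/label pair:

```python
import math

def bce_with_logits(logit, label):
    """Numerically stable binary cross-entropy on a raw logit (sigmoid folded in)."""
    # max(x, 0) - x*y + log(1 + exp(-|x|)) is the stable rewrite of
    # -[y*log(sigmoid(x)) + (1 - y)*log(1 - sigmoid(x))]
    return max(logit, 0) - logit * label + math.log1p(math.exp(-abs(logit)))

# A confident positive logit with a positive label incurs little loss...
print(bce_with_logits(4.0, 1.0))
# ...while the same logit against a negative label is penalized heavily.
print(bce_with_logits(4.0, 0.0))
```

Keeping the sigmoid inside the loss avoids overflow for large-magnitude logits, which is why the activation is left as `Identity` here.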
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `num_train_epochs`: 4
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | eval_average_precision |
|:------:|:----:|:-------------:|:----------------------:|
| 0.2800 | 500 | 0.181 | 0.8232 |
| 0.5599 | 1000 | 0.1431 | 0.8457 |
| 0.8399 | 1500 | 0.116 | 0.8569 |
| 1.0 | 1786 | - | 0.8621 |
| 1.1198 | 2000 | 0.1187 | 0.8696 |
| 1.3998 | 2500 | 0.1166 | 0.8764 |
| 1.6797 | 3000 | 0.1126 | 0.8871 |
| 1.9597 | 3500 | 0.1155 | 0.8902 |
| 2.0 | 3572 | - | 0.8852 |
| 2.2396 | 4000 | 0.0905 | 0.8877 |
| 2.5196 | 4500 | 0.1201 | 0.8886 |
| 2.7996 | 5000 | 0.0995 | 0.8901 |
| 3.0 | 5358 | - | 0.8898 |
| 3.0795 | 5500 | 0.0836 | 0.8882 |
| 3.3595 | 6000 | 0.0726 | 0.8867 |
| 3.6394 | 6500 | 0.1126 | 0.8919 |
| 3.9194 | 7000 | 0.0827 | 0.8903 |
| 4.0 | 7144 | - | 0.8905 |
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Kriptoreis53/blockassist-bc-hardy_nimble_cow_1754927272
|
Kriptoreis53
| 2025-08-11T15:49:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hardy nimble cow",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:49:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hardy nimble cow
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tiny-random/glm-4.5v
|
tiny-random
| 2025-08-11T15:45:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"glm4v_moe",
"image-text-to-text",
"conversational",
"base_model:zai-org/GLM-4.5V",
"base_model:finetune:zai-org/GLM-4.5V",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-11T15:45:16Z |
---
library_name: transformers
pipeline_tag: image-text-to-text
inference: true
widget:
- text: Hello!
example_title: Hello world
group: Python
base_model:
- zai-org/GLM-4.5V
---
This tiny model is for debugging. It is randomly initialized with the config adapted from [zai-org/GLM-4.5V](https://huggingface.co/zai-org/GLM-4.5V).
### Example usage:
```python
import torch
from transformers import AutoProcessor, Glm4vMoeForConditionalGeneration
model_id = "tiny-random/glm-4.5v"
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://upload.wikimedia.org/wikipedia/commons/f/fa/Grayscale_8bits_palette_sample_image.png"
},
{
"type": "text",
"text": "describe this image"
}
],
}
]
processor = AutoProcessor.from_pretrained(model_id)
model = Glm4vMoeForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt"
).to(model.device)
inputs.pop("token_type_ids", None)
generated_ids = model.generate(**inputs, max_new_tokens=16)
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
### Codes to create this repo:
```python
import json
from pathlib import Path
import accelerate
import torch
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoProcessor,
GenerationConfig,
Glm4vForConditionalGeneration,
Glm4vMoeForConditionalGeneration,
set_seed,
)
from transformers.models.glm4v_moe.modeling_glm4v_moe import Glm4vMoeTextTopkRouter
source_model_id = "zai-org/GLM-4.5V"
save_folder = "/tmp/tiny-random/glm-4.5v"
processor = AutoProcessor.from_pretrained(source_model_id, trust_remote_code=True)
processor.save_pretrained(save_folder)
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
config_json = json.load(f)
config_json['text_config'].update({
"hidden_size": 32,
"head_dim": 32,
"intermediate_size": 128,
"first_k_dense_replace": 1,
"moe_intermediate_size": 64,
"num_attention_heads": 2,
"num_key_value_heads": 1,
"num_hidden_layers": 2, # one dense, one moe
"tie_word_embeddings": True,
})
config_json['text_config']['rope_scaling']['mrope_section'] = [2, 2, 4]
config_json['vision_config']['hidden_size'] = 64
config_json['vision_config']['depth'] = 2
config_json['vision_config']['num_heads'] = 2
config_json['vision_config']['intermediate_size'] = 128
config_json['vision_config']['out_hidden_size'] = config_json['text_config']['hidden_size']
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
json.dump(config_json, f, indent=2)
config = AutoConfig.from_pretrained(
save_folder,
trust_remote_code=True,
)
print(config)
torch.set_default_dtype(torch.bfloat16)
model = Glm4vMoeForConditionalGeneration(config)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
model.generation_config = GenerationConfig.from_pretrained(
source_model_id, trust_remote_code=True,
)
set_seed(42)
model = model.cpu() # cpu is more stable for random initialization across machines
num_params = sum(p.numel() for p in model.parameters())
with torch.no_grad():
for name, p in sorted(model.named_parameters()):
torch.nn.init.normal_(p, 0, 0.1)
print(name, p.shape, p.dtype, p.device, f'{p.numel() / num_params * 100: .2f}%')
for _, m in sorted(model.named_modules()):
if isinstance(m, Glm4vMoeTextTopkRouter):
assert 'e_score_correction_bias' in m.state_dict()
torch.nn.init.normal_(m.e_score_correction_bias, 0, 1)
model.save_pretrained(save_folder)
print(model)
```
### Printing the model:
```text
Glm4vMoeForConditionalGeneration(
(model): Glm4vMoeModel(
(visual): Glm4vMoeVisionModel(
(embeddings): Glm4vMoeVisionEmbeddings(
(position_embedding): Embedding(576, 64)
)
(patch_embed): Glm4vMoeVisionPatchEmbed(
(proj): Conv3d(3, 64, kernel_size=(2, 14, 14), stride=(2, 14, 14))
)
(rotary_pos_emb): Glm4vMoeVisionRotaryEmbedding()
(blocks): ModuleList(
(0-1): 2 x Glm4vMoeVisionBlock(
(norm1): Glm4vMoeRMSNorm((64,), eps=1e-05)
(norm2): Glm4vMoeRMSNorm((64,), eps=1e-05)
(attn): Glm4vMoeVisionAttention(
(qkv): Linear(in_features=64, out_features=192, bias=False)
(proj): Linear(in_features=64, out_features=64, bias=False)
)
(mlp): Glm4vMoeisionMlp(
(gate_proj): Linear(in_features=64, out_features=32, bias=False)
(up_proj): Linear(in_features=64, out_features=32, bias=False)
(down_proj): Linear(in_features=32, out_features=64, bias=False)
(act_fn): SiLU()
)
)
)
(merger): Glm4vMoeVisionPatchMerger(
(proj): Linear(in_features=32, out_features=32, bias=False)
(post_projection_norm): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(gate_proj): Linear(in_features=32, out_features=128, bias=False)
(up_proj): Linear(in_features=32, out_features=128, bias=False)
(down_proj): Linear(in_features=128, out_features=32, bias=False)
(act1): GELU(approximate='none')
(act_fn): SiLU()
)
(post_conv_layernorm): Glm4vMoeRMSNorm((64,), eps=1e-05)
(downsample): Conv2d(64, 32, kernel_size=(2, 2), stride=(2, 2))
(post_layernorm): Glm4vMoeRMSNorm((64,), eps=1e-05)
)
(language_model): Glm4vMoeTextModel(
(embed_tokens): Embedding(151552, 32, padding_idx=151329)
(layers): ModuleList(
(0): Glm4vMoeTextDecoderLayer(
(self_attn): Glm4vMoeTextAttention(
(q_proj): Linear(in_features=32, out_features=64, bias=True)
(k_proj): Linear(in_features=32, out_features=32, bias=True)
(v_proj): Linear(in_features=32, out_features=32, bias=True)
(o_proj): Linear(in_features=64, out_features=32, bias=False)
)
(mlp): Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=128, bias=False)
(up_proj): Linear(in_features=32, out_features=128, bias=False)
(down_proj): Linear(in_features=128, out_features=32, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
(post_attention_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
)
(1): Glm4vMoeTextDecoderLayer(
(self_attn): Glm4vMoeTextAttention(
(q_proj): Linear(in_features=32, out_features=64, bias=True)
(k_proj): Linear(in_features=32, out_features=32, bias=True)
(v_proj): Linear(in_features=32, out_features=32, bias=True)
(o_proj): Linear(in_features=64, out_features=32, bias=False)
)
(mlp): Glm4vMoeTextMoE(
(experts): ModuleList(
(0-127): 128 x Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=64, bias=False)
(up_proj): Linear(in_features=32, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=32, bias=False)
(act_fn): SiLU()
)
)
(gate): Glm4vMoeTextTopkRouter()
(shared_experts): Glm4vMoeTextMLP(
(gate_proj): Linear(in_features=32, out_features=64, bias=False)
(up_proj): Linear(in_features=32, out_features=64, bias=False)
(down_proj): Linear(in_features=64, out_features=32, bias=False)
(act_fn): SiLU()
)
)
(input_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
(post_attention_layernorm): Glm4vMoeTextRMSNorm((32,), eps=1e-05)
)
)
(norm): Glm4vMoeRMSNorm((32,), eps=1e-05)
(rotary_emb): Glm4vMoeTextRotaryEmbedding()
)
)
(lm_head): Linear(in_features=32, out_features=151552, bias=False)
)
```
|
SoFairOA/software-mentions-models
|
SoFairOA
| 2025-08-11T15:44:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T10:47:05Z |
---
license: apache-2.0
---
# Softcite models developed in the SoFAIR EU Project
The goal of this GROBID module is to recognize software mentions in scholarly documents, both publisher XML and PDF.
It uses as training data the Softcite Dataset developed by the James Howison Lab at the University of Texas at Austin.
This annotated corpus and the present software text-mining component were developed with the support of a grant from the Alfred P. Sloan Foundation to improve credit for research software.
Github: https://github.com/softcite/software-mentions
Original author: Patrice Lopez
Current authors: SoFAIR Project
These models have been migrated from AWS S3.
|