| column | dtype | range |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-03 00:36:49 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 535 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-03 00:36:49 |
| card | string | lengths 11 – 1.01M |
Nadahass/MREG-LORA8-13b-4batches-300Steps-q8-lr3e-4
|
Nadahass
| 2024-05-08T17:43:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T17:43:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
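Since the card leaves this section as a stub, here is a minimal loading sketch, assuming the repository holds a standard causal-LM checkpoint (the repo id is taken from this row's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nadahass/MREG-LORA8-13b-4batches-300Steps-q8-lr3e-4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Generate a short completion as a smoke test.
inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```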
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
btcsinc/unsloth-llama-3-8b-bnb-4bit-spider
|
btcsinc
| 2024-05-08T17:39:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T17:39:05Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** btcsinc
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
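The card includes no usage snippet; a minimal loading sketch with Unsloth's `FastLanguageModel`, assuming the repository holds the full 4-bit weights:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="btcsinc/unsloth-llama-3-8b-bnb-4bit-spider",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("SELECT", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```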
|
era28/gemma-it-detoxification
|
era28
| 2024-05-08T17:35:40Z | 187 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T17:32:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
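The row's pipeline tag is `text-generation`, so a minimal sketch with the high-level pipeline API (prompt and decoding settings are placeholders):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="era28/gemma-it-detoxification")
print(generator("Rewrite this message politely:", max_new_tokens=64)[0]["generated_text"])
```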
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jialinselenasong/scibert-finetuned-ner
|
jialinselenasong
| 2024-05-08T17:35:23Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-08T17:33:22Z |
---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert-finetuned-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4251
- Precision: 0.6295
- Recall: 0.6818
- F1: 0.6546
- Accuracy: 0.8532
## Model description
More information needed
## Intended uses & limitations
More information needed
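Since the card omits usage code, here is a minimal sketch with the token-classification pipeline; the entity label set depends on the (unknown) training dataset:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jialinselenasong/scibert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("CRISPR-Cas9 enables targeted genome editing in human cells."))
```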
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 305 | 0.4221 | 0.6046 | 0.6641 | 0.6330 | 0.8486 |
| 0.4525 | 2.0 | 610 | 0.4192 | 0.6144 | 0.6755 | 0.6435 | 0.8498 |
| 0.4525 | 3.0 | 915 | 0.4251 | 0.6295 | 0.6818 | 0.6546 | 0.8532 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Nadahass/MREG-LORA4-7b-4batches-1107Steps-q8-lr2e-5
|
Nadahass
| 2024-05-08T17:34:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T17:34:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
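The card leaves this section empty and the row carries no pipeline tag; a minimal sketch with the pipeline API, assuming a text-generation checkpoint:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Nadahass/MREG-LORA4-7b-4batches-1107Steps-q8-lr2e-5")
print(pipe("Example prompt:", max_new_tokens=50)[0]["generated_text"])
```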
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JohnDoe70/SQAA_Instruct
|
JohnDoe70
| 2024-05-08T17:29:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T16:27:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
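The card gives no loading code; a minimal sketch, assuming an instruction-tuned causal LM (as the repo name suggests):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "JohnDoe70/SQAA_Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Question: What does this model do?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```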
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ugurcanvurgun/finetuned_Mistral_7B_v_0_2
|
ugurcanvurgun
| 2024-05-08T17:28:26Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T16:44:24Z |
---
license: apache-2.0
---
|
DUAL-GPO-2/zephyr-7b-gpo-v2-i0
|
DUAL-GPO-2
| 2024-05-08T17:22:52Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/zephyr-7b-gpo-final-i0",
"base_model:adapter:DUAL-GPO/zephyr-7b-gpo-final-i0",
"license:mit",
"region:us"
] | null | 2024-05-08T03:22:49Z |
---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: DUAL-GPO/zephyr-7b-gpo-final-i0
model-index:
- name: zephyr-7b-gpo-v2-i0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gpo-v2-i0
This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-final-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-final-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
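The card lists PEFT as the library and [DUAL-GPO/zephyr-7b-gpo-final-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-final-i0) as the base model; a minimal adapter-loading sketch (this also downloads the base weights):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter together with its declared base model.
model = AutoPeftModelForCausalLM.from_pretrained("DUAL-GPO-2/zephyr-7b-gpo-v2-i0")
tokenizer = AutoTokenizer.from_pretrained("DUAL-GPO/zephyr-7b-gpo-final-i0")
```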
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- total_eval_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2
|
maneln/fine-tuning-gpt2-test
|
maneln
| 2024-05-08T17:20:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-05-08T17:08:33Z |
---
library_name: transformers
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
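The row's pipeline tag is `question-answering`; a minimal sketch, assuming the checkpoint exposes an extractive-QA head:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="maneln/fine-tuning-gpt2-test")
print(qa(question="Who released GPT-2?", context="GPT-2 was released by OpenAI in 2019."))
```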
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wufuheng/OrpoLlama-3-8B
|
wufuheng
| 2024-05-08T17:20:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T17:08:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
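Given the `llama`, `text-generation`, and `conversational` tags, a minimal chat sketch using the tokenizer's chat template (assuming one is bundled with the checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wufuheng/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize ORPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```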
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NCTuanAnh/vietnamese_license_plate_detection_DeRT
|
NCTuanAnh
| 2024-05-08T17:19:25Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-06T17:06:00Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the `PytorchModelHubMixin` integration:
- Repo: [More Information Needed]
- Docs: [More Information Needed]
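With `PytorchModelHubMixin`, the authors' model class itself exposes `from_pretrained`; the class below is a hypothetical stand-in, since the card does not name it:

```python
# Hypothetical: the authors' detector class is not published in the card.
from your_project.model import LicensePlateDetector  # assumption

model = LicensePlateDetector.from_pretrained(
    "NCTuanAnh/vietnamese_license_plate_detection_DeRT"
)
```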
|
justshim/ppo-LunarLander-v2-test
|
justshim
| 2024-05-08T17:11:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-08T17:10:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.12 +/- 39.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub("justshim/ppo-LunarLander-v2-test", "ppo-LunarLander-v2-test.zip")
model = PPO.load(checkpoint)
```
|
miraiminds/evolutionary-mergingLLM-Hermes-Dans-zephyr
|
miraiminds
| 2024-05-08T17:10:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T17:00:32Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# final_merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861 as a base.
### Models Merged
The following models were included in the merge:
* /home/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
* /home/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
* /home/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
dtype: bfloat16
merge_method: task_arithmetic
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 8]
model: /home/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.45632162073777516
- layer_range: [0, 8]
model: /home/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.6468013824217197
- layer_range: [0, 8]
model: /home/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.04013732886365818
- layer_range: [0, 8]
model: /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [8, 16]
model: /home/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.39991499422381294
- layer_range: [8, 16]
model: /home/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.3033908462486953
- layer_range: [8, 16]
model: /home/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.3250623744211742
- layer_range: [8, 16]
model: /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [16, 24]
model: /home/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.6202885707451035
- layer_range: [16, 24]
model: /home/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.235874597687366
- layer_range: [16, 24]
model: /home/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.27733993810853536
- layer_range: [16, 24]
model: /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
- sources:
- layer_range: [24, 32]
model: /home/evol_merge_storage/input_models/Hermes-2-Pro-Mistral-7B_2793206805
parameters:
weight: 0.5484088078896818
- layer_range: [24, 32]
model: /home/evol_merge_storage/input_models/Dans-AdventurousWinds-Mk2-7b_1152917843
parameters:
weight: 0.876059424739865
- layer_range: [24, 32]
model: /home/evol_merge_storage/input_models/zephyr-7b-beta_2449712360
parameters:
weight: 0.26196309089548436
- layer_range: [24, 32]
model: /home/evol_merge_storage/input_models/Mistral-7B-v0.1_8133861
```
|
AbhishekSureddy/llama-7b-sft-alpaca-20k
|
AbhishekSureddy
| 2024-05-08T17:08:42Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-08T16:48:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
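The row's tags include `4-bit` and `bitsandbytes`; a minimal sketch loading the checkpoint in 4-bit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "AbhishekSureddy/llama-7b-sft-alpaca-20k"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```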
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
terry69/mistral-1p-full
|
terry69
| 2024-05-08T17:08:16Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T17:06:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
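The row's pipeline tag is `text-generation`; a minimal sketch with the pipeline API:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="terry69/mistral-1p-full", device_map="auto")
print(pipe("Tell me about Mistral.", max_new_tokens=64)[0]["generated_text"])
```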
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vsevolodl/prometheus-8x7b-v2.0-GGUF
|
vsevolodl
| 2024-05-08T17:05:57Z | 31 | 3 |
transformers
|
[
"transformers",
"gguf",
"text2text-generation",
"en",
"dataset:prometheus-eval/Feedback-Collection",
"dataset:prometheus-eval/Preference-Collection",
"arxiv:2405.01535",
"arxiv:2310.08491",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text2text-generation
| 2024-05-07T05:25:56Z |
---
datasets:
- prometheus-eval/Feedback-Collection
- prometheus-eval/Preference-Collection
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text2text-generation
license: apache-2.0
language:
- en
---
# Links for Reference
- **Quants for:** https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0
- **Homepage: In Progress**
- **Repository:** https://github.com/prometheus-eval/prometheus-eval
- **Paper:** https://arxiv.org/abs/2405.01535
- **Point of Contact:** seungone@cmu.edu
# TL;DR
Prometheus 2 is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and it can also serve as a reward model for Reinforcement Learning from Human Feedback (RLHF).

Prometheus 2 is a language model using [Mistral-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base model.
It is fine-tuned on 100K feedback instances from the [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and 200K from the [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).
The final model is produced by weight merging so that it supports both absolute grading (direct assessment) and relative grading (pairwise ranking).
Surprisingly, we find that weight merging also improves performance on each format.
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=prometheus-eval/Prometheus)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2405.01535)
- [GitHub Repo](https://github.com/prometheus-eval/prometheus-eval)
Prometheus 2 is trained at two different sizes (7B and 8x7B).
The 7B model is available on [this page](https://huggingface.co/prometheus-eval/prometheus-2-7b-v2.0).
Our datasets are available on [this page](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and [this page](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).
## Prompt Format
We provide wrapper functions and classes for conveniently using Prometheus 2 in [our GitHub repository](https://github.com/prometheus-eval/prometheus-eval).
We highly recommend using them!
If you prefer to use the model directly, refer to the prompt formats below.
Note that absolute grading and relative grading require different prompt templates and system prompts.
### Absolute Grading (Direct Assessment)
Prometheus requires four components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer, arranged in the prompt format below.
Fill in the instruction, response, reference answer, criteria description, and a score description for each score from 1 to 5.
Replace the components wrapped in \{text\} braces.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
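Because the placeholders use Python's `{name}` syntax, the template can be filled with `str.format`; a minimal sketch, assuming the template above is saved verbatim to a file (the path and all example values below are hypothetical):
```python
# The template above, saved verbatim (hypothetical path).
ABS_PROMPT = open("absolute_prompt.txt").read()

filled = ABS_PROMPT.format(
    orig_instruction="Explain photosynthesis to a ten-year-old.",  # hypothetical
    orig_response="Plants use sunlight to turn air and water into food.",
    orig_reference_answer="Photosynthesis lets plants make their own food from sunlight, water, and air.",
    orig_criteria="Is the explanation accurate and age-appropriate?",
    orig_score1_description="Inaccurate and confusing.",
    orig_score2_description="Mostly inaccurate.",
    orig_score3_description="Partially accurate but unclear.",
    orig_score4_description="Accurate with minor gaps.",
    orig_score5_description="Accurate and perfectly pitched to the audience.",
)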
After this, you should apply Mistral's conversation template (not applying it may lead to unexpected behavior).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```python
# FastChat ships the Mistral conversation template (pip install fschat).
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
# dialogs['instruction'] holds the filled-in prompt built from the template above.
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
x = tokenizer(prompt, truncation=False)
```
As a result, a feedback and a score decision will be generated, separated by the phrase `[RESULT]`.
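To recover the two fields programmatically, the generation can be split on the separator; a minimal parsing sketch (the `output` string is assumed to be the model's decoded completion):
```python
def parse_absolute_grading(output: str):
    """Split a Prometheus completion into (feedback, score).

    Assumes the completion follows the trained format:
    'Feedback: ... [RESULT] <integer between 1 and 5>'.
    """
    feedback, _, result = output.partition("[RESULT]")
    feedback = feedback.replace("Feedback:", "", 1).strip()
    score = int(result.strip())  # raises ValueError if the format is violated
    return feedback, score

# Hypothetical model output:
fb, score = parse_absolute_grading("Feedback: The response is clear. [RESULT] 4")
print(score)  # 4
```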
### Relative Grading (Pairwise Ranking)
Prometheus requires 4 components in the input: an instruction, 2 responses to evaluate, a score rubric, and a reference answer. Refer to the prompt format below.
You should fill in the instruction, the two responses, the reference answer, and the criteria description.
Replace the components marked with \{text\}.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of two responses strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, choose a better response between Response A and Response B. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (A or B)"
4. Please do not generate any other opening, closing, and explanations.
###Instruction:
{orig_instruction}
###Response A:
{orig_response_A}
###Response B:
{orig_response_B}
###Reference Answer:
{orig_reference_answer}
###Score Rubric:
{orig_criteria}
###Feedback:
```
After this, you should apply Mistral's conversation template (not applying it may lead to unexpected behavior).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```python
# Same FastChat template as above, with the relative-grading system prompt.
from fastchat.conversation import get_conv_template

conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant assigned to deliver insightful feedback that compares individual performances, highlighting how each stands relative to others within the same cohort.")
# dialogs['instruction'] holds the filled-in pairwise prompt built above.
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
x = tokenizer(prompt, truncation=False)
```
As a result, a feedback and a choice between Response A and Response B will be generated, separated by the phrase `[RESULT]`.
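Because this repository hosts GGUF quantizations of Prometheus 2 8x7B, a minimal local-inference sketch with llama-cpp-python may be useful; the quant filename and sampling settings below are assumptions, not part of the original card:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# The exact .gguf filename depends on which quant you downloaded (assumption).
llm = Llama(model_path="prometheus-8x7b-v2.0.Q4_K_M.gguf", n_ctx=4096)

# `prompt` is the Mistral-formatted string built with get_conv_template above.
out = llm(prompt, max_tokens=512, temperature=0.0)
completion = out["choices"][0]["text"]
feedback, _, verdict = completion.partition("[RESULT]")
print(feedback.strip(), verdict.strip())  # verdict is "A" or "B" here
```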
## License
Feedback Collection, Preference Collection, and Prometheus 2 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find this model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{kim2024prometheus,
title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models},
author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},
year={2024},
eprint={2405.01535},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
aixsatoshi/Ex-karakuri-8x12B-chat-v2
|
aixsatoshi
| 2024-05-08T17:05:11Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T06:18:00Z |
---
license: apache-2.0
---
Self-merging Meta-Llama-3-70B to expand its parameter count to 120B has been reported to improve performance.
To further raise the accuracy of [karakuri-ai/karakuri-lm-8x7b-chat-v0.1](https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-chat-v0.1), a high-quality Japanese LLM, we performed a self-expansion merge from `"num_hidden_layers": 32` to 56.
Based on the interval of the slices used for the merge, the non-merged portion of this model (Ex-karakuri-8x12B-chat-v2) is set to 4 layers, while that of [Ex-karakuri-8x12B-chat-v1](https://huggingface.co/aixsatoshi/Ex-karakuri-8x12B-chat-v1) is set to 8 layers.
It was inspired by large merges like:
- [Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct/)
- [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- [nsfwthrowitaway69/Venus-120b-v1.0](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0)
- [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b)
- [wolfram/miquliz-120b-v2.0](https://huggingface.co/wolfram/miquliz-120b-v2.0)
```yaml
slices:
- sources:
- layer_range: [0, 4]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [2, 6]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [4, 8]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [6, 10]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [8, 12]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [10, 14]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [12, 16]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [14, 18]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [16, 20]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [18, 22]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [20, 24]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [22, 26]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [24, 28]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [26, 30]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
- sources:
- layer_range: [28, 32]
model: karakuri-ai/karakuri-lm-8x7b-chat-v0.1
merge_method: passthrough
dtype: bfloat16
```
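For completeness, a minimal loading sketch with transformers (standard API; the device and dtype choices are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixsatoshi/Ex-karakuri-8x12B-chat-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the merge dtype above; device_map="auto" shards across GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```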
|
NBA55/Final_Experiment_with_trained_model_Final_DPO_for_all_3_epoch_2_lr_5e_4
|
NBA55
| 2024-05-08T17:04:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T17:04:42Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kid502/sd-class-butterflies-32
|
kid502
| 2024-05-08T17:01:56Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-05-08T17:01:14Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('kid502/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
StrangeSX/SeaLLM-7B-v2.5-ORPO-SARAA-M.GGUF
|
StrangeSX
| 2024-05-08T16:52:27Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:quantized:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T16:49:46Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model:** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xyh110sym/Llama3-262K-8B
|
xyh110sym
| 2024-05-08T16:49:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mergekit",
"peft",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T15:56:24Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- peft
---
# Untitled LoRA Model (1)
This is a LoRA extracted from a language model. It was extracted using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from /home/xiaoyuanhang/llamalora/Meta-Llama-3-8B-Instruct and uses /home/xiaoyuanhang/llamalora/Llama-3-8B-Instruct-262k as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
mergekit-extract-lora /home/xiaoyuanhang/llamalora/Llama-3-8B-Instruct-262k /home/xiaoyuanhang/llamalora/Meta-Llama-3-8B-Instruct OUTPUT_PATH --rank=32
```
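To apply the extracted adapter, peft's standard loading path can be used; a minimal sketch (the Hub IDs for the base model and the adapter are assumptions inferred from the local paths above):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumed Hub counterpart of the local Llama-3-8B-Instruct-262k base path.
base = AutoModelForCausalLM.from_pretrained("gradientai/Llama-3-8B-Instruct-262k")
# Assumed published location of this extracted adapter.
model = PeftModel.from_pretrained(base, "xyh110sym/Llama3-262K-8B")
```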
|
mp1704/gpt-neo-pt
|
mp1704
| 2024-05-08T16:48:41Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T16:47:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jialinselenasong/bert-finetuned-ner
|
jialinselenasong
| 2024-05-08T16:46:21Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-08T15:05:19Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4523
- Precision: 0.5873
- Recall: 0.6528
- F1: 0.6183
- Accuracy: 0.8526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
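These settings map directly onto a 🤗 `TrainingArguments` configuration; a minimal sketch (the output directory and any unlisted arguments are assumptions):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # assumption; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # Adam(betas=(0.9, 0.999), eps=1e-8) is the Trainer default
    num_train_epochs=3,
)
```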
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 305 | 0.4440 | 0.5662 | 0.6348 | 0.5985 | 0.8407 |
| 0.4907 | 2.0 | 610 | 0.4375 | 0.5692 | 0.6554 | 0.6093 | 0.8463 |
| 0.4907 | 3.0 | 915 | 0.4523 | 0.5873 | 0.6528 | 0.6183 | 0.8526 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
StrangeSX/SeaLLM-7B-v2.5-ORPO-SARAA-S-GGUF
|
StrangeSX
| 2024-05-08T16:45:44Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:quantized:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T16:42:55Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model:** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
alexm-nm/tinyllama-24-gptq-8bit-channelwise
|
alexm-nm
| 2024-05-08T16:44:49Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T16:36:36Z |
---
license: apache-2.0
---
|
DUAL-GPO/phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-i1
|
DUAL-GPO
| 2024-05-08T16:43:05Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-gpo-final-renew2-i0",
"base_model:adapter:DUAL-GPO/phi-2-gpo-final-renew2-i0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T13:21:30Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO/phi-2-gpo-final-renew2-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-renew2-b0.001-vllm-merge-20k-complete-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-final-renew2-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-final-renew2-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
ThatBlondeGuy/paper-250-epochs
|
ThatBlondeGuy
| 2024-05-08T16:41:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-08T13:20:52Z |

A Paper voice model from the new Rock, Paper, Scissors show on Nickelodeon.
Trained to 250 epochs; there is also a 200-epoch model!
https://rock-paper-scissors.fandom.com/wiki/Rock,_Paper,_Scissors
And here are a couple of episodes:
https://youtu.be/tkUbE5Mz2m4?feature=shared
Please tag me if you use the model,
and if the model needs an improvement/update, contact me at
rayplayz2666@gmail.com
Here is a link to the sample audio: https://huggingface.co/rayplayz/paper-200-epochs/resolve/main/test.wav?download=true
Have a nice day! :)
Another sample audio link: https://huggingface.co/rayplayz/paper-200-epochs/resolve/main/test2.wav?download=true
The final audio test file: https://huggingface.co/rayplayz/paper-200-epochs/resolve/main/test3.wav?download=true

|
hasnatz/exp1
|
hasnatz
| 2024-05-08T16:41:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T16:41:01Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** hasnatz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Richie1129/llama-3-finetune
|
Richie1129
| 2024-05-08T16:37:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-07T13:38:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexm-nm/tinyllama-24-gptq-8bit-g128
|
alexm-nm
| 2024-05-08T16:35:58Z | 3,937 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T16:30:41Z |
---
license: apache-2.0
---
|
adityapwr/ppo-LunarLander-v2
|
adityapwr
| 2024-05-08T16:34:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-08T16:34:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 210.07 +/- 85.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption, not confirmed by this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename inside the repo is assumed.
checkpoint = load_from_hub(repo_id="adityapwr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
StrangeSX/SeaLLM-7B-v2.5-ORPO-SARAA-M-GGUF
|
StrangeSX
| 2024-05-08T16:31:53Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:quantized:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T16:05:30Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model:** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ebayes/amazonas-fern-latest
|
ebayes
| 2024-05-08T16:12:28Z | 195 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-08T16:12:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: ebayes/amazonas-fern-latest
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7968992248062016
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ebayes/amazonas-fern-latest
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2619
- Accuracy: 0.7969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.9425 | 1.0 | 516 | 4.6442 | 0.2450 |
| 4.4394 | 2.0 | 1032 | 4.1936 | 0.3271 |
| 4.0242 | 3.0 | 1548 | 3.8147 | 0.3891 |
| 3.6368 | 4.0 | 2064 | 3.4881 | 0.4403 |
| 3.3168 | 5.0 | 2580 | 3.1849 | 0.4760 |
| 2.9583 | 6.0 | 3096 | 2.9087 | 0.5054 |
| 2.6652 | 7.0 | 3612 | 2.6435 | 0.5271 |
| 2.3696 | 8.0 | 4128 | 2.4352 | 0.5442 |
| 2.1322 | 9.0 | 4644 | 2.2335 | 0.5814 |
| 1.8776 | 10.0 | 5160 | 2.0674 | 0.5922 |
| 1.6773 | 11.0 | 5676 | 1.9474 | 0.6093 |
| 1.5136 | 12.0 | 6192 | 1.8081 | 0.6264 |
| 1.3341 | 13.0 | 6708 | 1.6931 | 0.6419 |
| 1.2215 | 14.0 | 7224 | 1.5986 | 0.6481 |
| 1.0886 | 15.0 | 7740 | 1.5309 | 0.6744 |
| 0.9762 | 16.0 | 8256 | 1.4605 | 0.6760 |
| 0.8322 | 17.0 | 8772 | 1.4038 | 0.6946 |
| 0.7767 | 18.0 | 9288 | 1.3404 | 0.6961 |
| 0.6943 | 19.0 | 9804 | 1.3143 | 0.7085 |
| 0.6011 | 20.0 | 10320 | 1.2708 | 0.7256 |
| 0.5585 | 21.0 | 10836 | 1.2777 | 0.7101 |
| 0.5014 | 22.0 | 11352 | 1.2744 | 0.7147 |
| 0.4704 | 23.0 | 11868 | 1.1907 | 0.7302 |
| 0.3934 | 24.0 | 12384 | 1.1748 | 0.7442 |
| 0.3616 | 25.0 | 12900 | 1.1897 | 0.7364 |
| 0.3274 | 26.0 | 13416 | 1.1648 | 0.7426 |
| 0.3062 | 27.0 | 13932 | 1.1899 | 0.7333 |
| 0.2726 | 28.0 | 14448 | 1.1192 | 0.7488 |
| 0.2425 | 29.0 | 14964 | 1.0887 | 0.7643 |
| 0.2316 | 30.0 | 15480 | 1.0957 | 0.7674 |
| 0.2321 | 31.0 | 15996 | 1.1206 | 0.7504 |
| 0.1828 | 32.0 | 16512 | 1.1901 | 0.7426 |
| 0.1675 | 33.0 | 17028 | 1.1317 | 0.7566 |
| 0.1572 | 34.0 | 17544 | 1.1530 | 0.7380 |
| 0.1453 | 35.0 | 18060 | 1.1519 | 0.7550 |
| 0.1385 | 36.0 | 18576 | 1.1358 | 0.7690 |
| 0.138 | 37.0 | 19092 | 1.1481 | 0.7628 |
| 0.1244 | 38.0 | 19608 | 1.1959 | 0.7442 |
| 0.1376 | 39.0 | 20124 | 1.1581 | 0.7659 |
| 0.107 | 40.0 | 20640 | 1.1979 | 0.7628 |
| 0.1219 | 41.0 | 21156 | 1.1915 | 0.7566 |
| 0.1105 | 42.0 | 21672 | 1.2247 | 0.7550 |
| 0.127 | 43.0 | 22188 | 1.1439 | 0.7736 |
| 0.1022 | 44.0 | 22704 | 1.1729 | 0.7535 |
| 0.1158 | 45.0 | 23220 | 1.2010 | 0.7535 |
| 0.1045 | 46.0 | 23736 | 1.2051 | 0.7519 |
| 0.103 | 47.0 | 24252 | 1.2006 | 0.7643 |
| 0.0967 | 48.0 | 24768 | 1.1888 | 0.7581 |
| 0.0963 | 49.0 | 25284 | 1.1814 | 0.7690 |
| 0.0923 | 50.0 | 25800 | 1.1566 | 0.7705 |
| 0.1071 | 51.0 | 26316 | 1.2239 | 0.7566 |
| 0.081 | 52.0 | 26832 | 1.2263 | 0.7581 |
| 0.0922 | 53.0 | 27348 | 1.1442 | 0.7628 |
| 0.0787 | 54.0 | 27864 | 1.2122 | 0.7705 |
| 0.0952 | 55.0 | 28380 | 1.3165 | 0.7504 |
| 0.1057 | 56.0 | 28896 | 1.2726 | 0.7550 |
| 0.1123 | 57.0 | 29412 | 1.2554 | 0.7597 |
| 0.0703 | 58.0 | 29928 | 1.1242 | 0.7752 |
| 0.094 | 59.0 | 30444 | 1.1734 | 0.7767 |
| 0.0699 | 60.0 | 30960 | 1.2493 | 0.7550 |
| 0.0731 | 61.0 | 31476 | 1.2414 | 0.7643 |
| 0.0888 | 62.0 | 31992 | 1.3430 | 0.7473 |
| 0.0737 | 63.0 | 32508 | 1.3174 | 0.7566 |
| 0.0825 | 64.0 | 33024 | 1.3129 | 0.7597 |
| 0.0821 | 65.0 | 33540 | 1.2509 | 0.7736 |
| 0.0817 | 66.0 | 34056 | 1.2020 | 0.7736 |
| 0.0754 | 67.0 | 34572 | 1.2447 | 0.7721 |
| 0.0854 | 68.0 | 35088 | 1.2626 | 0.7767 |
| 0.0755 | 69.0 | 35604 | 1.2202 | 0.7814 |
| 0.0847 | 70.0 | 36120 | 1.2525 | 0.7612 |
| 0.068 | 71.0 | 36636 | 1.2940 | 0.7674 |
| 0.0648 | 72.0 | 37152 | 1.2585 | 0.7736 |
| 0.0768 | 73.0 | 37668 | 1.2878 | 0.7597 |
| 0.0771 | 74.0 | 38184 | 1.2685 | 0.7659 |
| 0.0749 | 75.0 | 38700 | 1.2860 | 0.7721 |
| 0.0615 | 76.0 | 39216 | 1.3085 | 0.7643 |
| 0.0677 | 77.0 | 39732 | 1.3011 | 0.7674 |
| 0.0673 | 78.0 | 40248 | 1.2077 | 0.7814 |
| 0.0696 | 79.0 | 40764 | 1.2118 | 0.7860 |
| 0.0714 | 80.0 | 41280 | 1.1952 | 0.7767 |
| 0.0624 | 81.0 | 41796 | 1.2575 | 0.7690 |
| 0.0604 | 82.0 | 42312 | 1.2816 | 0.7736 |
| 0.0641 | 83.0 | 42828 | 1.3230 | 0.7643 |
| 0.0574 | 84.0 | 43344 | 1.2876 | 0.7752 |
| 0.0621 | 85.0 | 43860 | 1.2576 | 0.7845 |
| 0.0639 | 86.0 | 44376 | 1.2486 | 0.7705 |
| 0.0538 | 87.0 | 44892 | 1.2192 | 0.7845 |
| 0.0518 | 88.0 | 45408 | 1.2171 | 0.7674 |
| 0.0563 | 89.0 | 45924 | 1.3201 | 0.7581 |
| 0.0531 | 90.0 | 46440 | 1.2414 | 0.7736 |
| 0.0431 | 91.0 | 46956 | 1.3059 | 0.7736 |
| 0.0655 | 92.0 | 47472 | 1.3307 | 0.7566 |
| 0.0595 | 93.0 | 47988 | 1.2927 | 0.7659 |
| 0.0707 | 94.0 | 48504 | 1.2667 | 0.7628 |
| 0.0517 | 95.0 | 49020 | 1.2957 | 0.7597 |
| 0.0579 | 96.0 | 49536 | 1.3340 | 0.7643 |
| 0.0492 | 97.0 | 50052 | 1.3588 | 0.7535 |
| 0.0472 | 98.0 | 50568 | 1.3074 | 0.7612 |
| 0.0542 | 99.0 | 51084 | 1.2657 | 0.7705 |
| 0.0689 | 100.0 | 51600 | 1.2943 | 0.7752 |
| 0.0464 | 101.0 | 52116 | 1.2386 | 0.7953 |
| 0.0589 | 102.0 | 52632 | 1.2717 | 0.7767 |
| 0.0488 | 103.0 | 53148 | 1.2678 | 0.7814 |
| 0.0554 | 104.0 | 53664 | 1.2711 | 0.7783 |
| 0.0502 | 105.0 | 54180 | 1.2746 | 0.7721 |
| 0.0383 | 106.0 | 54696 | 1.3002 | 0.7798 |
| 0.0531 | 107.0 | 55212 | 1.2636 | 0.7891 |
| 0.0379 | 108.0 | 55728 | 1.3156 | 0.7721 |
| 0.042 | 109.0 | 56244 | 1.3668 | 0.7674 |
| 0.0543 | 110.0 | 56760 | 1.2883 | 0.7783 |
| 0.0522 | 111.0 | 57276 | 1.2913 | 0.7783 |
| 0.0469 | 112.0 | 57792 | 1.2847 | 0.7767 |
| 0.0598 | 113.0 | 58308 | 1.2642 | 0.7876 |
| 0.0472 | 114.0 | 58824 | 1.3264 | 0.7752 |
| 0.0405 | 115.0 | 59340 | 1.2648 | 0.7891 |
| 0.0434 | 116.0 | 59856 | 1.3059 | 0.7798 |
| 0.0481 | 117.0 | 60372 | 1.3373 | 0.7736 |
| 0.0454 | 118.0 | 60888 | 1.3237 | 0.7736 |
| 0.0504 | 119.0 | 61404 | 1.2956 | 0.7736 |
| 0.0495 | 120.0 | 61920 | 1.3504 | 0.7705 |
| 0.0424 | 121.0 | 62436 | 1.2852 | 0.7891 |
| 0.0493 | 122.0 | 62952 | 1.2621 | 0.7891 |
| 0.0421 | 123.0 | 63468 | 1.2755 | 0.7752 |
| 0.0339 | 124.0 | 63984 | 1.2914 | 0.7891 |
| 0.0415 | 125.0 | 64500 | 1.2959 | 0.7876 |
| 0.035 | 126.0 | 65016 | 1.2724 | 0.7891 |
| 0.0342 | 127.0 | 65532 | 1.2564 | 0.7798 |
| 0.0411 | 128.0 | 66048 | 1.2493 | 0.7798 |
| 0.0345 | 129.0 | 66564 | 1.2490 | 0.7891 |
| 0.0365 | 130.0 | 67080 | 1.2560 | 0.7969 |
| 0.0304 | 131.0 | 67596 | 1.2466 | 0.7876 |
| 0.0361 | 132.0 | 68112 | 1.2691 | 0.7953 |
| 0.0387 | 133.0 | 68628 | 1.2849 | 0.7860 |
| 0.0361 | 134.0 | 69144 | 1.2731 | 0.7891 |
| 0.0334 | 135.0 | 69660 | 1.2649 | 0.7907 |
| 0.0368 | 136.0 | 70176 | 1.2562 | 0.7953 |
| 0.0395 | 137.0 | 70692 | 1.2851 | 0.7891 |
| 0.0397 | 138.0 | 71208 | 1.2767 | 0.7891 |
| 0.0433 | 139.0 | 71724 | 1.2383 | 0.8031 |
| 0.031 | 140.0 | 72240 | 1.2429 | 0.7984 |
| 0.0326 | 141.0 | 72756 | 1.2389 | 0.8047 |
| 0.0369 | 142.0 | 73272 | 1.2475 | 0.8000 |
| 0.0436 | 143.0 | 73788 | 1.2762 | 0.7907 |
| 0.031 | 144.0 | 74304 | 1.2772 | 0.7891 |
| 0.0278 | 145.0 | 74820 | 1.2513 | 0.7984 |
| 0.0345 | 146.0 | 75336 | 1.2639 | 0.7969 |
| 0.034 | 147.0 | 75852 | 1.2679 | 0.7953 |
| 0.0331 | 148.0 | 76368 | 1.2682 | 0.7938 |
| 0.028 | 149.0 | 76884 | 1.2634 | 0.7953 |
| 0.0356 | 150.0 | 77400 | 1.2619 | 0.7969 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
OwOpeepeepoopoo/llama3
|
OwOpeepeepoopoo
| 2024-05-08T16:10:36Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T16:07:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuquyennnn/flant5-base-finetuning-v1
|
tuquyennnn
| 2024-05-08T16:10:21Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-08T16:09:49Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flant5-base-finetuning-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flant5-base-finetuning-v1
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3670671.36 | 0.32 | 250 | nan |
| 0.0 | 0.64 | 500 | nan |
| 0.0 | 0.96 | 750 | nan |
| 0.0 | 1.28 | 1000 | nan |
| 0.0 | 1.61 | 1250 | nan |
| 0.0 | 1.93 | 1500 | nan |
| 0.0 | 2.25 | 1750 | nan |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Mag0g/Ezekiel25_19
|
Mag0g
| 2024-05-08T16:09:42Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T16:07:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NBA55/Final_Experiment_with_trained_model_Final_DPO_for_all_3_epoch_1
|
NBA55
| 2024-05-08T16:07:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T16:07:25Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Roner1/JAIS-Mental-empty-space-layers
|
Roner1
| 2024-05-08T16:06:36Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:inceptionai/jais-13b-chat",
"base_model:adapter:inceptionai/jais-13b-chat",
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T12:48:23Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: inception-mbzuai/jais-13b-chat
model-index:
- name: JAIS-Mental-empty-space-layers
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JAIS-Mental-empty-space-layers
This model is a fine-tuned version of [inception-mbzuai/jais-13b-chat](https://huggingface.co/inception-mbzuai/jais-13b-chat) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
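As a rough guide only, here is a sketch mapping the values above onto `transformers.TrainingArguments`; the output directory is an assumption, and the Adam settings listed are the library defaults. This is a reconstruction, not the author's actual training script.
```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters.
args = TrainingArguments(
    output_dir="JAIS-Mental-empty-space-layers",  # assumption
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults.
)
```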
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
jonyoon/dummy-model
|
jonyoon
| 2024-05-08T16:03:20Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-08T16:02:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aryaduta/modellora2
|
Aryaduta
| 2024-05-08T16:01:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T16:00:56Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** Aryaduta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
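A hedged loading sketch using the Unsloth API this card references; `max_seq_length` and the generation settings are assumptions.
```python
from unsloth import FastLanguageModel

# Hypothetical usage; adjust max_seq_length to your needs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Aryaduta/modellora2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```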
|
kenhktsui/nano-phi-192M-v0.1
|
kenhktsui
| 2024-05-08T15:59:48Z | 130 | 1 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"en",
"dataset:kenhktsui/minipile_quality_score_v1",
"dataset:kenhktsui/simple_wikipedia_LM_quality_score_v1",
"dataset:kenhktsui/refinedweb-3m_quality_score_v1",
"dataset:kenhktsui/TM-DATA_quality_score_v1",
"dataset:kenhktsui/openwebtext_quality_score_v1",
"dataset:HuggingFaceTB/cosmopedia",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T15:28:28Z |
---
language:
- en
license: mit
library_name: transformers
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.1
repetition_penalty: 10
no_repeat_ngram_size: 4
eta_cutoff: 0.0006
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: >-
Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made
Destiny. The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: >-
The Harry Potter series, written by J.K. Rowling, begins with the book
titled
example_title: Harry Potter Series
- text: >-
Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: >-
Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: >-
Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and
another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
model-index:
- name: nano-phi-115M-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.15
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 29.99
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.46
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.3
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.45
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kenhktsui/nano-phi-115M-v0.1
name: Open LLM Leaderboard
datasets:
- kenhktsui/minipile_quality_score_v1
- kenhktsui/simple_wikipedia_LM_quality_score_v1
- kenhktsui/refinedweb-3m_quality_score_v1
- kenhktsui/TM-DATA_quality_score_v1
- kenhktsui/openwebtext_quality_score_v1
- HuggingFaceTB/cosmopedia
---
# Model Card for nano-phi-192M-v0.1
This is a continuation of [kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1).
The model is not aligned.
Major differences:
- larger tokenizer vocabulary
- addition of [HuggingFaceTB/cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) as a training dataset
- training tokens: 19B vs 7B
## How to use
To use the model, you will need transformers version >= 4.37.2
```bash
pip install "transformers>=4.37.2"
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="kenhktsui/nano-phi-192M-v0.1")
pipe("I am a machine learning researcher. I work on", max_new_tokens=50, repetition_penalty=10.0)
```
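The widget configuration in this card's YAML header suggests conservative sampling settings; the sketch below passes the same parameters through the pipeline, using one of the card's own example prompts.
```python
# Sampling settings mirrored from this card's widget config.
out = pipe(
    "The process of photosynthesis involves the conversion of",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.1,
    repetition_penalty=10.0,
    no_repeat_ngram_size=4,
    eta_cutoff=0.0006,
    renormalize_logits=True,
)
print(out[0]["generated_text"])
```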
## Some metrics
- model
  - hidden_size: 768
  - num_key_value_heads: 8 (grouped query attention)
  - num_attention_heads: 24
  - num_hidden_layers: 6
  - context length: 1024
  - total params: 192M
- training:
  - global steps: 36,000
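To make the shape parameters above concrete, here is an illustrative (not official) config sketch; `vocab_size` and `intermediate_size` are not stated on this card and are left at library defaults.
```python
# Hypothetical reconstruction of the architecture from the numbers above.
from transformers import PhiConfig, PhiForCausalLM

config = PhiConfig(
    hidden_size=768,
    num_hidden_layers=6,
    num_attention_heads=24,
    num_key_value_heads=8,          # grouped query attention
    max_position_embeddings=1024,   # context length
)
model = PhiForCausalLM(config)
# Parameter count depends heavily on vocab_size, which this card leaves unstated.
print(f"~{model.num_parameters() / 1e6:.0f}M parameters")
```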
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric |kenhktsui/nano-phi-192M-v0.1 |[kenhktsui/nano-phi-115M-v0.1](https://huggingface.co/kenhktsui/nano-phi-115M-v0.1)|[microsoft/phi-2](https://huggingface.co/microsoft/phi-2) (Reproduced)|
|-----------------------|---------------------------|---------------------------|---------------------------|
| Avg. |29.24 | 28.68 |61.53 |
| ARC (25-shot) |24.15 | 21.93 |61.52 |
| HellaSwag (10-shot) | 29.99 | 27.87 |75.13 |
| MMLU (5-shot) |25.46 | 25.30 |58.23 |
| TruthfulQA (0-shot) |44.30 | 46.01 |44.46 |
| Winogrande (5-shot) |51.54 | 50.99 |74.51 |
| GSM8K (5-shot) |0.0 | 0.0 |55.34 |
Details:
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|--------|------:|--------|-----:|---|-----:|
|arc_easy| 0|acc |0.4596|± |0.0102|
| | |acc_norm|0.4070|± |0.0101|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 25, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.1911|± |0.0115|
| | |acc_norm|0.2415|± |0.0125|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 10, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag| 0|acc |0.2833|± |0.0045|
| | |acc_norm|0.2999|± |0.0046|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.2583|± |0.0153|
| | |mc2 |0.4430|± |0.0152|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 8
| Task |Version| Metric |Value | |Stderr|
|-------------------------------------------------|------:|--------|-----:|---|-----:|
|hendrycksTest-abstract_algebra | 1|acc |0.2200|± |0.0416|
| | |acc_norm|0.2200|± |0.0416|
|hendrycksTest-anatomy | 1|acc |0.2593|± |0.0379|
| | |acc_norm|0.2593|± |0.0379|
|hendrycksTest-astronomy | 1|acc |0.1711|± |0.0306|
| | |acc_norm|0.1711|± |0.0306|
|hendrycksTest-business_ethics | 1|acc |0.2400|± |0.0429|
| | |acc_norm|0.2400|± |0.0429|
|hendrycksTest-clinical_knowledge | 1|acc |0.2566|± |0.0269|
| | |acc_norm|0.2566|± |0.0269|
|hendrycksTest-college_biology | 1|acc |0.2639|± |0.0369|
| | |acc_norm|0.2639|± |0.0369|
|hendrycksTest-college_chemistry | 1|acc |0.1800|± |0.0386|
| | |acc_norm|0.1800|± |0.0386|
|hendrycksTest-college_computer_science | 1|acc |0.3300|± |0.0473|
| | |acc_norm|0.3300|± |0.0473|
|hendrycksTest-college_mathematics | 1|acc |0.3000|± |0.0461|
| | |acc_norm|0.3000|± |0.0461|
|hendrycksTest-college_medicine | 1|acc |0.2023|± |0.0306|
| | |acc_norm|0.2023|± |0.0306|
|hendrycksTest-college_physics | 1|acc |0.2843|± |0.0449|
| | |acc_norm|0.2843|± |0.0449|
|hendrycksTest-computer_security | 1|acc |0.2200|± |0.0416|
| | |acc_norm|0.2200|± |0.0416|
|hendrycksTest-conceptual_physics | 1|acc |0.2511|± |0.0283|
| | |acc_norm|0.2511|± |0.0283|
|hendrycksTest-econometrics | 1|acc |0.2807|± |0.0423|
| | |acc_norm|0.2807|± |0.0423|
|hendrycksTest-electrical_engineering | 1|acc |0.2897|± |0.0378|
| | |acc_norm|0.2897|± |0.0378|
|hendrycksTest-elementary_mathematics | 1|acc |0.2804|± |0.0231|
| | |acc_norm|0.2804|± |0.0231|
|hendrycksTest-formal_logic | 1|acc |0.2143|± |0.0367|
| | |acc_norm|0.2143|± |0.0367|
|hendrycksTest-global_facts | 1|acc |0.1700|± |0.0378|
| | |acc_norm|0.1700|± |0.0378|
|hendrycksTest-high_school_biology | 1|acc |0.3226|± |0.0266|
| | |acc_norm|0.3226|± |0.0266|
|hendrycksTest-high_school_chemistry | 1|acc |0.2759|± |0.0314|
| | |acc_norm|0.2759|± |0.0314|
|hendrycksTest-high_school_computer_science | 1|acc |0.2700|± |0.0446|
| | |acc_norm|0.2700|± |0.0446|
|hendrycksTest-high_school_european_history | 1|acc |0.2606|± |0.0343|
| | |acc_norm|0.2606|± |0.0343|
|hendrycksTest-high_school_geography | 1|acc |0.3081|± |0.0329|
| | |acc_norm|0.3081|± |0.0329|
|hendrycksTest-high_school_government_and_politics| 1|acc |0.3627|± |0.0347|
| | |acc_norm|0.3627|± |0.0347|
|hendrycksTest-high_school_macroeconomics | 1|acc |0.2641|± |0.0224|
| | |acc_norm|0.2641|± |0.0224|
|hendrycksTest-high_school_mathematics | 1|acc |0.2630|± |0.0268|
| | |acc_norm|0.2630|± |0.0268|
|hendrycksTest-high_school_microeconomics | 1|acc |0.3403|± |0.0308|
| | |acc_norm|0.3403|± |0.0308|
|hendrycksTest-high_school_physics | 1|acc |0.3113|± |0.0378|
| | |acc_norm|0.3113|± |0.0378|
|hendrycksTest-high_school_psychology | 1|acc |0.2716|± |0.0191|
| | |acc_norm|0.2716|± |0.0191|
|hendrycksTest-high_school_statistics | 1|acc |0.4491|± |0.0339|
| | |acc_norm|0.4491|± |0.0339|
|hendrycksTest-high_school_us_history | 1|acc |0.2402|± |0.0300|
| | |acc_norm|0.2402|± |0.0300|
|hendrycksTest-high_school_world_history | 1|acc |0.2363|± |0.0277|
| | |acc_norm|0.2363|± |0.0277|
|hendrycksTest-human_aging | 1|acc |0.2197|± |0.0278|
| | |acc_norm|0.2197|± |0.0278|
|hendrycksTest-human_sexuality | 1|acc |0.2824|± |0.0395|
| | |acc_norm|0.2824|± |0.0395|
|hendrycksTest-international_law | 1|acc |0.2479|± |0.0394|
| | |acc_norm|0.2479|± |0.0394|
|hendrycksTest-jurisprudence | 1|acc |0.2037|± |0.0389|
| | |acc_norm|0.2037|± |0.0389|
|hendrycksTest-logical_fallacies | 1|acc |0.2393|± |0.0335|
| | |acc_norm|0.2393|± |0.0335|
|hendrycksTest-machine_learning | 1|acc |0.1875|± |0.0370|
| | |acc_norm|0.1875|± |0.0370|
|hendrycksTest-management | 1|acc |0.2039|± |0.0399|
| | |acc_norm|0.2039|± |0.0399|
|hendrycksTest-marketing | 1|acc |0.1795|± |0.0251|
| | |acc_norm|0.1795|± |0.0251|
|hendrycksTest-medical_genetics | 1|acc |0.3000|± |0.0461|
| | |acc_norm|0.3000|± |0.0461|
|hendrycksTest-miscellaneous | 1|acc |0.2644|± |0.0158|
| | |acc_norm|0.2644|± |0.0158|
|hendrycksTest-moral_disputes | 1|acc |0.2225|± |0.0224|
| | |acc_norm|0.2225|± |0.0224|
|hendrycksTest-moral_scenarios | 1|acc |0.2726|± |0.0149|
| | |acc_norm|0.2726|± |0.0149|
|hendrycksTest-nutrition | 1|acc |0.2353|± |0.0243|
| | |acc_norm|0.2353|± |0.0243|
|hendrycksTest-philosophy | 1|acc |0.2283|± |0.0238|
| | |acc_norm|0.2283|± |0.0238|
|hendrycksTest-prehistory | 1|acc |0.2099|± |0.0227|
| | |acc_norm|0.2099|± |0.0227|
|hendrycksTest-professional_accounting | 1|acc |0.2411|± |0.0255|
| | |acc_norm|0.2411|± |0.0255|
|hendrycksTest-professional_law | 1|acc |0.2458|± |0.0110|
| | |acc_norm|0.2458|± |0.0110|
|hendrycksTest-professional_medicine | 1|acc |0.3897|± |0.0296|
| | |acc_norm|0.3897|± |0.0296|
|hendrycksTest-professional_psychology | 1|acc |0.2141|± |0.0166|
| | |acc_norm|0.2141|± |0.0166|
|hendrycksTest-public_relations | 1|acc |0.1818|± |0.0369|
| | |acc_norm|0.1818|± |0.0369|
|hendrycksTest-security_studies | 1|acc |0.2490|± |0.0277|
| | |acc_norm|0.2490|± |0.0277|
|hendrycksTest-sociology | 1|acc |0.2537|± |0.0308|
| | |acc_norm|0.2537|± |0.0308|
|hendrycksTest-us_foreign_policy | 1|acc |0.2900|± |0.0456|
| | |acc_norm|0.2900|± |0.0456|
|hendrycksTest-virology | 1|acc |0.1807|± |0.0300|
| | |acc_norm|0.1807|± |0.0300|
|hendrycksTest-world_religions | 1|acc |0.1813|± |0.0295|
| | |acc_norm|0.1813|± |0.0295|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|----------|------:|------|-----:|---|-----:|
|winogrande| 0|acc |0.5154|± | 0.014|
hf-causal-experimental (pretrained=/content/lm-evaluation-harness/artifacts/model-9gh18vfl:v25,use_accelerate=false,trust_remote_code=True), limit: None, provide_description: False, num_fewshot: 5, batch_size: 8
|Task |Version|Metric|Value| |Stderr|
|-----|------:|------|----:|---|-----:|
|gsm8k| 0|acc | 0|± | 0|
|
StrangeSX/SeaLLM-7B-v2.5-ORPO-SARAA-M
|
StrangeSX
| 2024-05-08T15:55:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:SeaLLMs/SeaLLM-7B-v2.5",
"base_model:finetune:SeaLLMs/SeaLLM-7B-v2.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T15:55:37Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: SeaLLMs/SeaLLM-7B-v2.5
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model :** SeaLLMs/SeaLLM-7B-v2.5
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Labib11/PMC_bge_800
|
Labib11
| 2024-05-08T15:51:53Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-08T15:50:22Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
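Because the final `Normalize()` layer (see the architecture below) L2-normalizes the embeddings, cosine similarity can be used directly for semantic search. A minimal sketch with made-up example sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder kept from this card
corpus = ["The patient was treated with oral antibiotics.",
          "The study measured photosynthetic efficiency in maize."]
query = "How was the infection treated?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# With normalized embeddings, cosine similarity is just a dot product.
print(util.cos_sim(query_emb, corpus_emb))
```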
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
abc88767/model104
|
abc88767
| 2024-05-08T15:49:53Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T15:48:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sweet-dreambooths/black-eyed-peas-v1-crafted-variable-prompt-3-epochs-piano-prompts
|
sweet-dreambooths
| 2024-05-08T15:49:26Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"musicgen_melody",
"text-to-audio",
"sweet-dreambooths/black-eyed-peas-v1-piano-prompts",
"generated_from_trainer",
"base_model:facebook/musicgen-melody-large",
"base_model:adapter:facebook/musicgen-melody-large",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2024-05-08T15:15:06Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- text-to-audio
- sweet-dreambooths/black-eyed-peas-v1-piano-prompts
- generated_from_trainer
base_model: facebook/musicgen-melody-large
model-index:
- name: black-eyed-peas-v1-crafted-variable-prompt-3-epochs-piano-prompts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanchit-gandhi/black-eyed-peas/runs/ghk59xig)
# black-eyed-peas-v1-crafted-variable-prompt-3-epochs-piano-prompts
This model is a fine-tuned version of [facebook/musicgen-melody-large](https://huggingface.co/facebook/musicgen-melody-large) on the SWEET-DREAMBOOTHS/BLACK-EYED-PEAS-V1-PIANO-PROMPTS - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0873
- Text Cosine Sim: 0.3268
- Audio Cosine Sim: 0.5888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 456
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.19.1
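A hedged sketch for loading this LoRA adapter on top of the base checkpoint; the prompt text and generation length are assumptions, not taken from this card.
```python
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody-large")
model = PeftModel.from_pretrained(
    base, "sweet-dreambooths/black-eyed-peas-v1-crafted-variable-prompt-3-epochs-piano-prompts"
)
processor = AutoProcessor.from_pretrained("facebook/musicgen-melody-large")

inputs = processor(text=["gentle solo piano ballad"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)  # hypothetical length
```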
|
Aryaduta/modellora
|
Aryaduta
| 2024-05-08T15:49:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T15:48:51Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** Aryaduta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhquy1624/NLI_xnli_2label
|
minhquy1624
| 2024-05-08T15:48:30Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-08T15:45:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# minhquy1624/NLI_xnli_2label
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('minhquy1624/NLI_xnli_2label')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('minhquy1624/NLI_xnli_2label')
model = AutoModel.from_pretrained('minhquy1624/NLI_xnli_2label')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
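Continuing from the snippet above, the pair can be scored with cosine similarity, which matches the `SiameseDistanceMetric.COSINE_DISTANCE` used during training (see the loss parameters below):
```python
import torch.nn.functional as F

# Illustrative only: a higher score means the sentences are closer under
# the contrastive objective this model was trained with.
score = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"cosine similarity: {score.item():.4f}")
```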
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=minhquy1624/NLI_xnli_2label)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2046 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.1
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Labib11/PMC_bge_1600
|
Labib11
| 2024-05-08T15:44:13Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-08T15:42:59Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
hrishi27/doplhin-dpo2
|
hrishi27
| 2024-05-08T15:43:27Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"base_model:lvwerra/gpt2-imdb",
"base_model:finetune:lvwerra/gpt2-imdb",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T05:12:33Z |
---
base_model: lvwerra/gpt2-imdb
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: doplhin-dpo2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# doplhin-dpo2
This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
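A hedged sketch of a DPO run matching these values; the preference data and `beta` are assumptions, not taken from this card.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("lvwerra/gpt2-imdb")
tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token

# Tiny illustrative preference dataset (prompt / chosen / rejected).
preference_ds = Dataset.from_dict({
    "prompt": ["The movie was"],
    "chosen": [" sharply written and beautifully acted."],
    "rejected": [" bad."],
})

args = TrainingArguments(
    output_dir="doplhin-dpo2",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    beta=0.1,                   # assumption: trl's default
    train_dataset=preference_ds,
    tokenizer=tokenizer,
)
trainer.train()
```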
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Litzy619/V0507HMA15HV2
|
Litzy619
| 2024-05-08T15:43:00Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-05-08T04:39:47Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0507HMA15HV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0507HMA15HV2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: -95.0498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| -8.6532 | 0.09 | 10 | -10.3327 |
| -11.8451 | 0.18 | 20 | -14.2014 |
| -17.3537 | 0.27 | 30 | -22.7794 |
| -28.8388 | 0.36 | 40 | -38.5512 |
| -47.1885 | 0.45 | 50 | -59.6364 |
| -67.2573 | 0.54 | 60 | -76.6591 |
| -81.223 | 0.63 | 70 | -86.3414 |
| -87.9651 | 0.73 | 80 | -90.0475 |
| -91.3192 | 0.82 | 90 | -92.4350 |
| -92.7456 | 0.91 | 100 | -93.1825 |
| -93.4032 | 1.0 | 110 | -93.7378 |
| -93.8855 | 1.09 | 120 | -93.9331 |
| -94.0075 | 1.18 | 130 | -93.9987 |
| -94.001 | 1.27 | 140 | -94.3115 |
| -94.3566 | 1.36 | 150 | -94.4505 |
| -94.3346 | 1.45 | 160 | -94.2625 |
| -94.5793 | 1.54 | 170 | -94.3309 |
| -93.2701 | 1.63 | 180 | -93.4388 |
| -94.2829 | 1.72 | 190 | -93.8681 |
| -94.6778 | 1.81 | 200 | -94.7489 |
| -94.5762 | 1.9 | 210 | -94.7745 |
| -94.8427 | 1.99 | 220 | -94.8903 |
| -94.8653 | 2.08 | 230 | -94.8499 |
| -94.9237 | 2.18 | 240 | -94.9720 |
| -95.0027 | 2.27 | 250 | -94.9841 |
| -94.9404 | 2.36 | 260 | -94.8479 |
| -94.9594 | 2.45 | 270 | -95.0076 |
| -95.0772 | 2.54 | 280 | -95.0798 |
| -95.0775 | 2.63 | 290 | -95.0480 |
| -95.0528 | 2.72 | 300 | -95.0415 |
| -95.0652 | 2.81 | 310 | -95.0442 |
| -95.0738 | 2.9 | 320 | -95.0494 |
| -95.0694 | 2.99 | 330 | -95.0498 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
YorkieOH10/Mixtral-Instruct-ITR-8x7B-Q4_K_M-GGUF
|
YorkieOH10
| 2024-05-08T15:34:47Z | 2 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-08T15:31:20Z |
---
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/Mixtral-Instruct-ITR-8x7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Envoid/Mixtral-Instruct-ITR-8x7B`](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo YorkieOH10/Mixtral-Instruct-ITR-8x7B-Q4_K_M-GGUF --model mixtral-instruct-itr-8x7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo YorkieOH10/Mixtral-Instruct-ITR-8x7B-Q4_K_M-GGUF --model mixtral-instruct-itr-8x7b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral-instruct-itr-8x7b.Q4_K_M.gguf -n 128
```
|
alexm-nm/tinyllama-24-gptq-4bit-channelwise
|
alexm-nm
| 2024-05-08T15:34:24Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T15:30:39Z |
---
license: apache-2.0
---
|
alexm-nm/tinyllama-24-marlin24-4bit-channelwise
|
alexm-nm
| 2024-05-08T15:31:07Z | 221 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T15:27:57Z |
---
license: apache-2.0
---
|
Pausaniax/Jhayco
|
Pausaniax
| 2024-05-08T15:30:40Z | 0 | 0 |
timm
|
[
"timm",
"music",
"audio-to-audio",
"es",
"dataset:HuggingFaceFW/fineweb",
"arxiv:1910.09700",
"license:mit",
"region:us"
] |
audio-to-audio
| 2024-05-08T15:26:56Z |
---
license: mit
datasets:
- HuggingFaceFW/fineweb
language:
- es
library_name: timm
pipeline_tag: audio-to-audio
tags:
- music
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
davelotito/donut-base-sroie-metrics-combined-new-instance-050824
|
davelotito
| 2024-05-08T15:26:16Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-05-08T14:35:12Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut-base-sroie-metrics-combined-new-instance-050824
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-metrics-combined-new-instance-050824
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6594
- Bleu: 0.0554
- Precisions: [0.7702970297029703, 0.671875, 0.6061381074168798, 0.5538922155688623]
- Brevity Penalty: 0.0858
- Length Ratio: 0.2894
- Translation Length: 505
- Reference Length: 1745
- Cer: 0.7675
- Wer: 0.8512
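A minimal inference sketch (the input image and the task prompt token are assumptions; Donut fine-tunes usually define their own decoder start token, so check the tokenizer's special tokens before use):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "davelotito/donut-base-sroie-metrics-combined-new-instance-050824"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")  # hypothetical input scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt below is an assumption; replace it with the start token
# this fine-tune was actually trained with.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```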
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 3.4087 | 1.0 | 253 | 1.6485 | 0.0082 | [0.4555984555984556, 0.14316702819956617, 0.06188118811881188, 0.01440922190201729] | 0.0936 | 0.2968 | 518 | 1745 | 0.8353 | 0.9368 |
| 1.1999 | 2.0 | 506 | 0.8976 | 0.0274 | [0.6748878923766816, 0.5347043701799485, 0.4578313253012048, 0.3890909090909091] | 0.0543 | 0.2556 | 446 | 1745 | 0.7818 | 0.8754 |
| 0.8013 | 3.0 | 759 | 0.7295 | 0.0508 | [0.7580645161290323, 0.6583143507972665, 0.5890052356020943, 0.5384615384615384] | 0.0806 | 0.2842 | 496 | 1745 | 0.7629 | 0.8557 |
| 0.6404 | 4.0 | 1012 | 0.6594 | 0.0554 | [0.7702970297029703, 0.671875, 0.6061381074168798, 0.5538922155688623] | 0.0858 | 0.2894 | 505 | 1745 | 0.7675 | 0.8512 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Szczotar93/Inkaso_beta
|
Szczotar93
| 2024-05-08T15:22:22Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:layoutlmv3",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-07T19:10:54Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- layoutlmv3
model-index:
- name: Inkaso_beta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Inkaso_beta
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0801
- Creditor address: {'precision': 1.0, 'recall': 0.875, 'f1': 0.9333333333333333, 'number': 48}
- Creditor name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34}
- Creditor proxy: {'precision': 0.8333333333333334, 'recall': 0.8108108108108109, 'f1': 0.8219178082191781, 'number': 37}
- Debtor address: {'precision': 0.9636363636363636, 'recall': 1.0, 'f1': 0.9814814814814815, 'number': 53}
- Debtor name: {'precision': 0.9428571428571428, 'recall': 1.0, 'f1': 0.9705882352941176, 'number': 33}
- Doc id: {'precision': 0.85, 'recall': 0.8947368421052632, 'f1': 0.8717948717948718, 'number': 19}
- Title: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34}
- Overall Precision: 0.9492
- Overall Recall: 0.9419
- Overall F1: 0.9455
- Overall Accuracy: 0.9831
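A minimal inference sketch, assuming OCR words and 0-1000-normalized bounding boxes are already available (the words and boxes below are hypothetical):

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("Szczotar93/Inkaso_beta")
model = LayoutLMForTokenClassification.from_pretrained("Szczotar93/Inkaso_beta")

# Hypothetical OCR output: words plus their boxes, normalized to 0-1000.
words = ["INKASO", "Jan", "Kowalski"]
word_boxes = [[72, 30, 210, 55], [300, 120, 360, 140], [365, 120, 470, 140]]

tokens, boxes = [], []
for word, box in zip(words, word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    boxes.extend([box] * len(word_tokens))  # every sub-token inherits its word's box

# Add special tokens and their conventional boxes.
input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
boxes = [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]]

outputs = model(
    input_ids=torch.tensor([input_ids]),
    bbox=torch.tensor([boxes]),
    attention_mask=torch.ones(1, len(input_ids), dtype=torch.long),
)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```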
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Creditor address | Creditor name | Creditor proxy | Debtor address | Debtor name | Doc id | Title | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4642 | 6.6667 | 20 | 0.2502 | {'precision': 0.782608695652174, 'recall': 0.75, 'f1': 0.7659574468085107, 'number': 48} | {'precision': 0.9354838709677419, 'recall': 0.8529411764705882, 'f1': 0.8923076923076922, 'number': 34} | {'precision': 0.8, 'recall': 0.6486486486486487, 'f1': 0.7164179104477612, 'number': 37} | {'precision': 0.8205128205128205, 'recall': 0.6037735849056604, 'f1': 0.6956521739130435, 'number': 53} | {'precision': 0.95, 'recall': 0.5757575757575758, 'f1': 0.7169811320754716, 'number': 33} | {'precision': 1.0, 'recall': 0.2631578947368421, 'f1': 0.4166666666666667, 'number': 19} | {'precision': 0.8461538461538461, 'recall': 0.3235294117647059, 'f1': 0.46808510638297873, 'number': 34} | 0.8478 | 0.6047 | 0.7059 | 0.9330 |
| 0.1387 | 13.3333 | 40 | 0.0914 | {'precision': 1.0, 'recall': 0.9166666666666666, 'f1': 0.9565217391304348, 'number': 48} | {'precision': 0.9714285714285714, 'recall': 1.0, 'f1': 0.9855072463768115, 'number': 34} | {'precision': 0.7777777777777778, 'recall': 0.7567567567567568, 'f1': 0.7671232876712328, 'number': 37} | {'precision': 0.9444444444444444, 'recall': 0.9622641509433962, 'f1': 0.9532710280373832, 'number': 53} | {'precision': 0.8918918918918919, 'recall': 1.0, 'f1': 0.9428571428571428, 'number': 33} | {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9234 | 0.9341 | 0.9287 | 0.9795 |
| 0.0431 | 20.0 | 60 | 0.0774 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 48} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | {'precision': 0.8181818181818182, 'recall': 0.7297297297297297, 'f1': 0.7714285714285715, 'number': 37} | {'precision': 0.9636363636363636, 'recall': 1.0, 'f1': 0.9814814814814815, 'number': 53} | {'precision': 0.9428571428571428, 'recall': 1.0, 'f1': 0.9705882352941176, 'number': 33} | {'precision': 0.7727272727272727, 'recall': 0.8947368421052632, 'f1': 0.8292682926829269, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9425 | 0.9535 | 0.9480 | 0.9837 |
| 0.0216 | 26.6667 | 80 | 0.0842 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 48} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | {'precision': 0.7631578947368421, 'recall': 0.7837837837837838, 'f1': 0.7733333333333334, 'number': 37} | {'precision': 0.9454545454545454, 'recall': 0.9811320754716981, 'f1': 0.9629629629629629, 'number': 53} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 33} | {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9286 | 0.9574 | 0.9427 | 0.9825 |
| 0.0142 | 33.3333 | 100 | 0.0840 | {'precision': 1.0, 'recall': 0.875, 'f1': 0.9333333333333333, 'number': 48} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | {'precision': 0.8333333333333334, 'recall': 0.8108108108108109, 'f1': 0.8219178082191781, 'number': 37} | {'precision': 0.9629629629629629, 'recall': 0.9811320754716981, 'f1': 0.9719626168224299, 'number': 53} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 33} | {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9416 | 0.9380 | 0.9398 | 0.9819 |
| 0.0105 | 40.0 | 120 | 0.0838 | {'precision': 0.9772727272727273, 'recall': 0.8958333333333334, 'f1': 0.9347826086956522, 'number': 48} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | {'precision': 0.8333333333333334, 'recall': 0.8108108108108109, 'f1': 0.8219178082191781, 'number': 37} | {'precision': 0.9636363636363636, 'recall': 1.0, 'f1': 0.9814814814814815, 'number': 53} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 33} | {'precision': 0.8095238095238095, 'recall': 0.8947368421052632, 'f1': 0.8500000000000001, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9385 | 0.9457 | 0.9421 | 0.9819 |
| 0.0081 | 46.6667 | 140 | 0.0801 | {'precision': 1.0, 'recall': 0.875, 'f1': 0.9333333333333333, 'number': 48} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | {'precision': 0.8333333333333334, 'recall': 0.8108108108108109, 'f1': 0.8219178082191781, 'number': 37} | {'precision': 0.9636363636363636, 'recall': 1.0, 'f1': 0.9814814814814815, 'number': 53} | {'precision': 0.9428571428571428, 'recall': 1.0, 'f1': 0.9705882352941176, 'number': 33} | {'precision': 0.85, 'recall': 0.8947368421052632, 'f1': 0.8717948717948718, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 34} | 0.9492 | 0.9419 | 0.9455 | 0.9831 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
whizzzzkid/nous_sevens7_t
|
whizzzzkid
| 2024-05-08T15:21:10Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-07T21:18:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sweet-dreambooths/black-eyed-peas-v1-crafted-prompt-3-epochs-piano-prompts
|
sweet-dreambooths
| 2024-05-08T15:17:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"musicgen_melody",
"text-to-audio",
"sweet-dreambooths/black-eyed-peas-v1-piano-prompts",
"generated_from_trainer",
"base_model:facebook/musicgen-melody-large",
"base_model:adapter:facebook/musicgen-melody-large",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2024-05-08T14:09:52Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- text-to-audio
- sweet-dreambooths/black-eyed-peas-v1-piano-prompts
- generated_from_trainer
base_model: facebook/musicgen-melody-large
model-index:
- name: black-eyed-peas-v1-crafted-prompt-3-epochs-piano-prompts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sanchit-gandhi/black-eyed-peas/runs/4jpcauy7)
# black-eyed-peas-v1-crafted-prompt-3-epochs-piano-prompts
This model is a fine-tuned version of [facebook/musicgen-melody-large](https://huggingface.co/facebook/musicgen-melody-large) on the SWEET-DREAMBOOTHS/BLACK-EYED-PEAS-V1-PIANO-PROMPTS - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0910
- Text Cosine Sim: 0.4804
- Audio Cosine Sim: 0.5601
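A minimal sketch for loading the PEFT adapters on top of the base model (the text prompt is a hypothetical example):

```python
import torch
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-melody-large")
base = MusicgenMelodyForConditionalGeneration.from_pretrained(
    "facebook/musicgen-melody-large", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "sweet-dreambooths/black-eyed-peas-v1-crafted-prompt-3-epochs-piano-prompts"
)

# Hypothetical prompt in the spirit of the fine-tuning data.
inputs = processor(text=["a gentle piano ballad"], padding=True, return_tensors="pt").to(base.device)
audio_values = model.generate(**inputs, max_new_tokens=256)
sampling_rate = base.config.audio_encoder.sampling_rate
```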
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 456
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.19.1
|
KaraKaraWitch/UnFimbulvetr-20B
|
KaraKaraWitch
| 2024-05-08T15:16:37Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"base_model:finetune:Sao10K/Fimbulvetr-11B-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-07T16:06:50Z |
---
base_model: ["Sao10K/Fimbulvetr-11B-v2"]
library_name: transformers
tags:
- mergekit
- merge
---
# UnFimbulvetr-20B

*Waifu to catch your attention*
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
NOTE: *Only tested this for a bit. YMMV.*
## Next Day Tests...
Downloaded the GGUF model that someone quantized... And... nope. No.
**Do not use this model.**
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* Sao10K/Fimbulvetr-11B-v2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: FimbMagic
layer_range: [0, 13]
- sources:
- model: FimbMagic
layer_range: [8, 13]
- sources:
- model: FimbMagic
layer_range: [12, 36]
- sources:
- model: FimbMagic
layer_range: [12, 36]
- sources:
- model: FimbMagic
layer_range: [36, 48]
- sources:
- model: FimbMagic
layer_range: [36, 48]
merge_method: passthrough
dtype: bfloat16
```
### Additional Notes
Fimbulvetr 11B is still a very good model. This model is for extreme trailblazers who want to test stuff!
Eval results? Don't bother.
Last one before I sleep: *I'm so sorry Sao10K...*
|
johnnyf/ppo-LunarLander-v2
|
johnnyf
| 2024-05-08T15:13:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-08T15:13:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.54 +/- 15.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch to fill in the TODO (the checkpoint filename is an assumption based on the usual upload convention, so adjust it to match the actual file in the repository):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub; the zip filename is assumed.
checkpoint = load_from_hub(
    repo_id="johnnyf/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP
|
Fischerboot
| 2024-05-08T15:13:12Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:Fischerboot/Llama3-Aura-3some-SLERP",
"base_model:merge:Fischerboot/Llama3-Aura-3some-SLERP",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:merge:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-07T20:53:26Z |
---
base_model:
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- Fischerboot/Llama3-Aura-3some-SLERP
library_name: transformers
tags:
- mergekit
- merge
license: llama3
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored)
* [Fischerboot/Llama3-Aura-3some-SLERP](https://huggingface.co/Fischerboot/Llama3-Aura-3some-SLERP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Fischerboot/Llama3-Aura-3some-SLERP
layer_range:
- 0
- 32
- model: Orenguteng/Llama-3-8B-Lexi-Uncensored
layer_range:
- 0
- 32
merge_method: slerp
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
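For reference, a toy sketch of the spherical interpolation that the SLERP method applies per tensor, where `t` comes from the layer- and module-dependent schedules in the config above (`t=0` keeps the base model's tensor, `t=1` takes the other model's; simplified relative to mergekit's real implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:
        return (1 - t) * a + t * b  # nearly parallel: fall back to plain lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```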
|
Driseri/lora_model_llama3-8B-low1
|
Driseri
| 2024-05-08T15:07:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T15:06:13Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Driseri
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
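A minimal loading sketch, assuming the repository holds LoRA adapters as the name suggests; if it instead contains merged weights, load the repo directly with `from_pretrained`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo contains adapters for the 4-bit base model below.
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "Driseri/lora_model_llama3-8B-low1")
```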
|
keanhean/esm2_t33_650M_UR50D-pfam-classification
|
keanhean
| 2024-05-08T15:02:02Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"esm",
"text-classification",
"generated_from_trainer",
"base_model:facebook/esm2_t33_650M_UR50D",
"base_model:finetune:facebook/esm2_t33_650M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-07T17:01:14Z |
---
license: mit
base_model: facebook/esm2_t33_650M_UR50D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esm2_t33_650M_UR50D-pfam-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t33_650M_UR50D-pfam-classification
This model is a fine-tuned version of [facebook/esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0743
- Accuracy: 0.9882
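A minimal inference sketch (the protein sequence below is a hypothetical example):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="keanhean/esm2_t33_650M_UR50D-pfam-classification",
)
print(classifier("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```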
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9930 | 71 | 0.2934 | 0.9450 |
| No log | 2.0 | 143 | 0.0912 | 0.9856 |
| No log | 2.9790 | 213 | 0.0743 | 0.9882 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Rimyy/Gpt2Gsm5
|
Rimyy
| 2024-05-08T15:01:30Z | 147 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T15:01:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShakaRover/XiaoYuEr-llama3
|
ShakaRover
| 2024-05-08T15:01:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-08T15:01:07Z |
---
license: apache-2.0
---
|
Arezo93/xlm-roberta-base-finetuned-language-detection
|
Arezo93
| 2024-05-08T15:00:40Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-08T15:00:01Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-language-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-language-detection
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1487
- Accuracy: 0.9509
- F1: 0.9508
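A minimal inference sketch (the example sentences are hypothetical):

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Arezo93/xlm-roberta-base-finetuned-language-detection",
)
print(detector(["Bonjour tout le monde", "Hallo Welt"]))
```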
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3214 | 1.0 | 727 | 0.1715 | 0.9450 | 0.9447 |
| 0.1512 | 2.0 | 1454 | 0.1487 | 0.9509 | 0.9508 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
Jillian/Gemma-2B-4bit_DepressionDetection
|
Jillian
| 2024-05-08T14:59:05Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T14:34:51Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sourabh1407/gemma-2b-it-finetuned-dolly
|
Sourabh1407
| 2024-05-08T14:58:57Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"base_model:adapter:google/gemma-2b-it",
"license:gemma",
"region:us"
] | null | 2024-05-08T14:01:48Z |
---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b-it
model-index:
- name: gemma-2b-it-finetuned-dolly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-it-finetuned-dolly
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on an unspecified dataset.
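A minimal sketch for loading the PEFT adapters on top of the base model (the prompt is a hypothetical example):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", device_map="auto")
model = PeftModel.from_pretrained(base, "Sourabh1407/gemma-2b-it-finetuned-dolly")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Give me three tips for learning to cook."}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```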
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
qunfengd/esm2_t12_35M_UR50D-finetuned-AMP_Antibacteria-classification_CombinedPos5Epoch
|
qunfengd
| 2024-05-08T14:48:43Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"esm",
"text-classification",
"generated_from_keras_callback",
"base_model:facebook/esm2_t12_35M_UR50D",
"base_model:finetune:facebook/esm2_t12_35M_UR50D",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-08T14:48:21Z |
---
license: mit
tags:
- generated_from_keras_callback
base_model: facebook/esm2_t12_35M_UR50D
model-index:
- name: esm2_t12_35M_UR50D-finetuned-AMP_Antibacteria-classification_CombinedPos5Epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-AMP_Antibacteria-classification_CombinedPos5Epoch
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2090
- Train Accuracy: 0.9111
- Validation Loss: 0.5719
- Validation Accuracy: 0.7842
- Epoch: 4
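A minimal TensorFlow inference sketch (the peptide sequence is a hypothetical example):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "qunfengd/esm2_t12_35M_UR50D-finetuned-AMP_Antibacteria-classification_CombinedPos5Epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical peptide sequence; inputs follow the ESM-2 tokenizer's format.
inputs = tokenizer("GIGKFLHSAKKFGKAFVGEIMNS", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```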
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6535 | 0.6007 | 0.6349 | 0.6217 | 0 |
| 0.5923 | 0.6721 | 0.5965 | 0.6733 | 1 |
| 0.4529 | 0.7850 | 0.5529 | 0.7241 | 2 |
| 0.2886 | 0.8738 | 0.5695 | 0.7650 | 3 |
| 0.2090 | 0.9111 | 0.5719 | 0.7842 | 4 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
SirDamisola/iw_state_of_nation_lora_model_FT
|
SirDamisola
| 2024-05-08T14:33:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T14:33:29Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
---
# Uploaded model
- **Developed by:** SirDamisola
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
enchan1/ppo-lunarlander
|
enchan1
| 2024-05-08T14:29:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-08T14:27:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.64 +/- 20.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch to fill in the TODO (the checkpoint filename in the repository is not documented, so the name below is an assumption to adjust):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained agent from the Hub; the zip filename is assumed.
checkpoint = load_from_hub(
    repo_id="enchan1/ppo-lunarlander",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
sravaniayyagari/lora_model_5_safetensors_duplicates
|
sravaniayyagari
| 2024-05-08T14:26:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-05-08T14:23:30Z |
---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
pcuenq/codegemma-7b-it-tmp
|
pcuenq
| 2024-05-08T14:18:03Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T14:14:43Z |
---
library_name: transformers
pipeline_tag: text-generation
license: gemma
license_link: https://ai.google.dev/gemma/terms
---
# CodeGemma
Model Page
: [CodeGemma](https://ai.google.dev/gemma/docs/codegemma)
Resources and Technical Documentation
: [Technical Report](https://goo.gle/codegemma)
: [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
Terms of Use
: [Terms](https://ai.google.dev/gemma/terms)
Authors
: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
CodeGemma is a collection of lightweight open code models built on top of Gemma. CodeGemma models are text-to-text and text-to-code decoder-only models, available as a 7 billion parameter pretrained variant that specializes in code completion and code generation tasks, a 7 billion parameter instruction-tuned variant for code chat and instruction following, and a 2 billion parameter pretrained variant for fast code completion.
| | [codegemma-2b](https://huggingface.co/google/codegemma-2b) | [codegemma-7b](https://huggingface.co/google/codegemma-7b) | [**codegemma-7b-it**](https://huggingface.co/google/codegemma-7b-it) |
|----------------------------------|:----------------------------------------------------------------:|:----------------------------------------------------------:|:----------------------------------------------------------------:|
| Code Completion | ✅ | ✅ | |
| Generation from natural language | | ✅ | ✅ |
| Chat | | | ✅ |
| Instruction Following | | | ✅ |
### Sample Usage
This model is intended to answer questions about code fragments, to generate code from natural language, or to engage in a conversation with the user about programming or technical problems. If you need to use code completion (for example, integrated in an IDE), we recommend you use one of the pre-trained models instead: [CodeGemma 7B](https://huggingface.co/google/codegemma-7b), or [CodeGemma 2B](https://huggingface.co/google/codegemma-2b).
#### For Code Generation
```python
from transformers import GemmaTokenizer, AutoModelForCausalLM
tokenizer = GemmaTokenizer.from_pretrained("google/codegemma-7b-it")
model = AutoModelForCausalLM.from_pretrained("google/codegemma-7b-it")
input_text = "Write me a Python function to calculate the nth fibonacci number."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/codegemma-7b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
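The returned ids include the prompt, so one way to print only the newly generated reply is:

```py
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```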
### Inputs and Outputs
Inputs
: For pretrained model variants: code prefix and/or suffix for code completion and generation scenarios, or natural language text or prompt
: For instruction tuned model variant: natural language text or prompt
Outputs
: For pretrained model variants: fill-in-the-middle code completion, code and natural language
: For instruction tuned model variant: code and natural language
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
Using Gemma as the base model, CodeGemma 2B and 7B pretrained variants are further trained on an additional 500 billion tokens of primarily English language data from publicly available code repositories, open source mathematics datasets and synthetically generated code.
### Training Data Processing
The following data pre-processing techniques were applied:
* FIM: Pretrained CodeGemma models focus on fill-in-the-middle (FIM) tasks. The models are trained to work with both PSM and SPM modes, with FIM settings of an 80% FIM rate and a 50-50 PSM/SPM split (see the prompt-format sketch after this list).
* Dependency Graph-based Packing and Unit Test-based Lexical Packing techniques: To improve model alignment with real-world applications, we structured training examples at the project/repository level to co-locate the most relevant source files within each repository. Specifically, we employed two heuristic techniques: dependency graph-based packing and unit test-based lexical packing.
* We developed a novel technique for splitting the documents into prefix, middle, and suffix to make the suffix start at a more syntactically natural point rather than following a purely random distribution.
* Safety: Similarly to Gemma, we deployed rigorous safety filtering, including filtering of personal data, CSAM filtering, and other filtering based on content quality and safety in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
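A sketch of what a PSM-style FIM prompt can look like for the *pretrained* variants (a minimal example assuming CodeGemma's published FIM control tokens, which are not defined elsewhere in this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/codegemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/codegemma-2b")

# PSM order: prefix, then suffix; the model generates the missing middle.
prompt = (
    "<|fim_prefix|>def fib(n):\n"
    "    if n <= 1:\n"
    "        return n\n"
    "    <|fim_suffix|>\n<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```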
## Implementation Information
Information about the hardware and software used to train the models.
### Hardware
CodeGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
## Evaluation Information
Model evaluation metrics and results.
### Evaluation Approach
We evaluate CodeGemma on a variety of academic benchmarks across several domains:
* Code completion benchmarks: HumanEval Single Line and Multiple Line Infilling
* Code generation benchmarks: HumanEval, MBPP, BabelCode (C++, C#, Go, Java, JavaScript, Kotlin, Python, Rust)
* Q&A: BoolQ, PIQA, TriviaQA
* Natural Language: ARC-Challenge, HellaSwag, MMLU, WinoGrande
* Math Reasoning: GSM8K, MATH
### Evaluation Results
#### Coding Benchmarks
Benchmark | 2B | 7B | 7B-IT
----------------------|-------|-------|------
HumanEval | 31.1 | 44.5 | 56.1
MBPP | 43.6 | 56.2 | 54.2
HumanEval Single Line | 78.41 | 76.09 | 68.25
HumanEval Multi Line | 51.44 | 58.44 | 20.05
BC HE C++ | 24.2 | 32.9 | 42.2
BC HE C# | 10.6 | 22.4 | 26.7
BC HE Go | 20.5 | 21.7 | 28.6
BC HE Java | 29.2 | 41.0 | 48.4
BC HE JavaScript | 21.7 | 39.8 | 46.0
BC HE Kotlin | 28.0 | 39.8 | 51.6
BC HE Python | 21.7 | 42.2 | 48.4
BC HE Rust | 26.7 | 34.1 | 36.0
BC MBPP C++ | 47.1 | 53.8 | 56.7
BC MBPP C# | 28.7 | 32.5 | 41.2
BC MBPP Go | 45.6 | 43.3 | 46.2
BC MBPP Java | 41.8 | 50.3 | 57.3
BC MBPP JavaScript | 45.3 | 58.2 | 61.4
BC MBPP Kotlin | 46.8 | 54.7 | 59.9
BC MBPP Python | 38.6 | 59.1 | 62.0
BC MBPP Rust | 45.3 | 52.9 | 53.5
#### Natural Language Benchmarks

## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:
* Human evaluation on prompts covering content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach.
* Specific testing of cyber-offence capabilities, focusing on testing autonomous hacking capabilities and ensuring potential harms are limited.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, and large-scale harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details.
## Model Usage & Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
CodeGemma models have a wide range of applications, which vary between the instruction-tuned (IT) and pretrained (PT) models. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development.
Code Completion
: PT models can be used to complete code with an IDE extension
Code Generation
: IT model can be used to generate code with or without an IDE extension
Code Conversation
: IT model can power conversation interfaces which discuss code.
Code Education
: IT model supports interactive code learning experiences, aids in syntax correction or provides coding practice.
### Known Limitations
Large Language Models (LLMs) have limitations based on their training data and the inherent limitations of the technology. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_results) for more details on the limitations of LLMs.
### Ethical Considerations & Risks
The development of large language models (LLMs) raises several ethical concerns. We have carefully considered multiple aspects in the development of these models. Please refer to [the same discussion](https://ai.google.dev/gemma/docs/model_card#ethical_considerations_and_risks) in the Gemma model card for more details.
### Benefits
At the time of release, this family of models provides high-performance, open, code-focused large language model implementations designed from the ground up for Responsible AI development, compared to similarly sized models.
Using the coding benchmark evaluation metrics described in this document, these models have been shown to provide superior performance to other comparably sized open model alternatives.
|
alexm-nm/tinyllama-24-gptq-4bit-g128
|
alexm-nm
| 2024-05-08T14:15:23Z | 3,947 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T14:08:00Z |
---
license: apache-2.0
---
|
NoteDance/DeiT
|
NoteDance
| 2024-05-08T14:15:10Z | 0 | 0 |
tf
|
[
"tf",
"Note",
"vit",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-05-08T14:11:58Z |
---
license: apache-2.0
pipeline_tag: image-classification
tags:
- Note
- vit
library_name: tf
---
This model is built with Note, which can be found [here](https://github.com/NoteDance/Note). The model implementation can be found [here](https://github.com/NoteDance/Note/blob/Note-7.0/Note/neuralnetwork/tf/DeiT.py), and the tutorial can be found [here](https://github.com/NoteDance/Note-documentation/tree/tf-7.0).
|
ThatOneSkyler/Reinforce-Pixelcopter-PLE-v0
|
ThatOneSkyler
| 2024-05-08T14:14:43Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-08T14:06:50Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.90 +/- 17.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aumoai/Mixtral-8x7B-Instruct-v0.1-touch-rugby-rules-adapters
|
aumoai
| 2024-05-08T14:12:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-08T14:12:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhanjun/path-to-save-model
|
zhanjun
| 2024-05-08T14:10:53Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-05-08T14:05:42Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: a photo of sks dog
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - zhanjun/path-to-save-model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
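Until the snippet above is filled in, here is a minimal sketch of the usual diffusers loading path for this checkpoint (the instance prompt comes from this card; the generation parameters are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "zhanjun/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# "sks dog" is the DreamBooth instance prompt this model was trained on
image = pipeline("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```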
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
jerryyun/kicon_mixtral87_merged_41766
|
jerryyun
| 2024-05-08T14:09:30Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-08T14:04:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YYYYYYibo/nash_dpo_doff_no_golden_iter_1
|
YYYYYYibo
| 2024-05-08T14:08:58Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:adapter:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-07T11:51:15Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: alignment-handbook/zephyr-7b-sft-full
datasets:
- updated
- original
model-index:
- name: nash_dpo_doff_no_golden_iter_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_dpo_doff_no_golden_iter_1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Logits/chosen: -2.7444
- Logits/rejected: -2.6646
- Logps/chosen: -288.1525
- Logps/rejected: -266.5563
- Loss: 0.6749
- Rewards/accuracies: 0.6100
- Rewards/chosen: -0.0411
- Rewards/margins: 0.0493
- Rewards/rejected: -0.0904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | Validation Loss | Rewards/accuracies | Rewards/chosen | Rewards/margins | Rewards/rejected |
|:-------------:|:-----:|:----:|:-------------:|:---------------:|:------------:|:--------------:|:---------------:|:------------------:|:--------------:|:---------------:|:----------------:|
| 0.6794 | 0.65 | 100 | -2.7444 | -2.6646 | -288.1525 | -266.5563 | 0.6749 | 0.6100 | -0.0411 | 0.0493 | -0.0904 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
ChiJuiChen/lab7_model
|
ChiJuiChen
| 2024-05-08T14:00:43Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-25T08:22:22Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: lab7_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab7_model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 2.7274 | 0.4614 | 5000 | 2.0313 |
| 1.887 | 0.9228 | 10000 | 1.7172 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Softy-lines/Pixel-Digit-Classifier
|
Softy-lines
| 2024-05-08T13:57:17Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2024-05-08T13:46:50Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: image-to-text
---
# Model Card for Pixelated Captcha Digit Detection
## Model Details
- **License:** Apache-2.0
- **Developed by:** Saidi Souhaieb
- **Finetuned from model:** YOLOv8
## Uses
This model is designed to detect pixelated captcha digits, drawing bounding boxes and extracting the coordinates of each detection.
## How to Get Started with the Model
```python
import os

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
from PIL import Image


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        # After three 2x2 poolings a 32x32 input becomes 4x4, hence 64 * 4 * 4
        self.fc1 = nn.Linear(64 * 4 * 4, 500)
        self.fc2 = nn.Linear(500, 10)  # 10 classes: digits 0-9

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 64 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x


transform = transforms.Compose([
    transforms.Resize((32, 32)),  # Match the input size the network expects
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

model = CNN()
model.load_state_dict(torch.load('models/99acc_model.pth'))
model.eval()  # Inference mode


def predict_number(folder_path):
    """
    Predict the digits in the images in the folder.
    """
    predict_numbers = []
    for file in os.listdir(folder_path):
        # Load and preprocess the input image
        input_image = Image.open(os.path.join(folder_path, file)).convert('RGB')
        input_tensor = transform(input_image)
        input_batch = input_tensor.unsqueeze(0)  # Add a batch dimension
        # Perform inference
        with torch.no_grad():
            output = model(input_batch)
        # Get the predicted class label
        _, predicted = torch.max(output, 1)
        print("Predicted class label:", predicted.item(), "file", file)
        predict_numbers.append(predicted.item())
    return predict_numbers
```
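A usage sketch (the folder of cropped digit images is an assumption):
```python
digits = predict_number("temp")  # e.g. [3, 1, 4, 1]
print(digits)
```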
## Training Details
### Training Data
Pixel Digit Captcha Data [https://huggingface.co/datasets/Softy-lines/Pixel-Digit-Captcha-Data]
## Model Card Authors
[Saidi Souhaieb]
|
hamzamurtaza/xml_long_diagrams
|
hamzamurtaza
| 2024-05-08T13:52:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-05-08T13:52:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
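For reference, a sketch of the equivalent `transformers` `BitsAndBytesConfig` reconstructed from the values above:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```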
### Framework versions
- PEFT 0.4.0
|
zbigi/BART_raw
|
zbigi
| 2024-05-08T13:45:36Z | 180 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-03T02:10:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
BART with randomly initialized parameter values. Its output is a sequence of random tokens, which makes it useful as a baseline for comparison.
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abc88767/model103
|
abc88767
| 2024-05-08T13:42:33Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T13:40:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
batatar/llama-3-8b-chat-guanaco
|
batatar
| 2024-05-08T13:40:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T13:36:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Harish-Gokul/AI-Calling-ORPO-Finetuned-llama3-model-change-in-dataset
|
Harish-Gokul
| 2024-05-08T13:38:02Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T13:35:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexm-nm/tinyllama-24-marlin24-4bit-g128
|
alexm-nm
| 2024-05-08T13:37:29Z | 15,404 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-08T13:32:27Z |
---
license: apache-2.0
---
|
baek26/all_3769_all_6417_bart-base_rl
|
baek26
| 2024-05-08T13:33:58Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-05-08T13:33:16Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# BART is a sequence-to-sequence model, so the text2text-generation pipeline applies here
generator = pipeline("text2text-generation", model="baek26/all_3769_all_6417_bart-base_rl")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
# The seq2seq value-head class matches BART's encoder-decoder architecture
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("baek26/all_3769_all_6417_bart-base_rl")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("baek26/all_3769_all_6417_bart-base_rl")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
baconnier/Gaston-Llama-3-8B
|
baconnier
| 2024-05-08T13:33:08Z | 26 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-04T10:09:55Z |
# Tired of incomprehensible administrative jargon?
Gaston is here to help!

💡 This AI was designed to rephrase administrative communications and documents in clear, simple language.
📝 With Gaston, no more obscure letters and nebulous procedures. Everything becomes crystal clear and within everyone's reach.
😊 Gaston is a POC (Proof of Concept) whose mission is to make public administration more transparent and accessible.
🙌 Its secret? The ability to analyze jargon and translate it into terms anyone can understand.
💬 With Gaston, administrative procedures finally become child's play!
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
---
# Uploaded model
- **Developed by:** baconnier
- **License:** apache-2.0
- **Finetuned from model :** NousResearch/Hermes-2-Pro-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
This model was trained with ORPO, using the ChatML prompt template format.
```
<|im_start|>user
Qui est tu ?
<|im_end|>
<|im_start|>assistant
```
# Example with local TGI:
See the snippet below for usage with local inference:
```python
#Example: reuse your existing OpenAI setup
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="TGI")
completion = client.chat.completions.create(
model="baconnier/Gaston-Llama-3-8B",
messages=[
{"role": "system", "content": "Reponds comme Gaston, un fonctionnaire français qui s'exprime uniquement en français avec des termes tres simples et surtout sans jamais utiliser de jargon administratif."},
{"role": "user", "content": "l’action en nullité se prescrit par 5 ans ou est prescrite en 5 ans"},
{"role": "assistant", "content": ""}
],
max_tokens=400,
temperature=0.7,
)
print(completion.choices[0].message.content)
```
Output:
```
Le délai de prescription pour l'action en nullité est généralement de cinq ans à compter du jour où
la partie qui peut agir a eu connaissance des faits, selon le Code civil français.
```
# Example with Transformers and pipeline:
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "baconnier/Gaston-Llama-3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda",
)
messages = [
    {
        "role": "system",
        "content": (
            "Reponds comme Gaston, un fonctionnaire français qui s'exprime uniquement en français "
            "avec des termes tres simples et surtout sans jamais utiliser de jargon administratif."
        ),
    },
    {
        "role": "user",
        "content": (
            "Vous voudrez bien vous munir de toutes les pièces justificatives afférentes à vos déclarations "
            "de revenus et de patrimoine des 3 derniers exercices, nonobstant les éléments déjà produits. "
            "À défaut de réponse de votre part ou en cas de non comparution aux date et heure susvisées, "
            "la procédure de contrôle se poursuivra et nous serons fondés à retenir les éléments en notre "
            "possession en vue de l'établissement des impositions supplémentaires et pénalités subséquentes."
        ),
    },
    {"role": "assistant", "content": ""},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
Question:
```
Vous voudrez bien vous munir de toutes les pièces justificatives afférentes à vos déclarations de revenus
et de patrimoine des 3 derniers exercices, nonobstant les éléments déjà produits.
À défaut de réponse de votre part ou en cas de non comparution aux date et heure susvisées,
la procédure de contrôle se poursuivra et nous serons fondés à retenir les éléments en notre possession
en vue de l'établissement des impositions supplémentaires et pénalités subséquentes.
```
Response:
```
Je vous invite à fournir toutes les pièces justificatives concernant vos déclarations de revenus
et de patrimoine des trois dernières années, malgré les éléments déjà fournis.
En cas de non-respect de ce délai ou de manque de réponse,
la procédure de vérification continuera, et nous utiliserons les informations en notre possession
pour établir les impôts supplémentaires et les pénalités associées.
```
Question:
```
Nous accusons réception de votre demande de renouvellement de passeport en date du 15 mars 2023.
Après instruction de votre dossier, il appert que les pièces produites s'avèrent insuffisantes
eu égard aux dispositions de l'article 5-1 du décret n°2005-1726 du 30 décembre 2005.
Nonobstant les justificatifs déjà fournis, il vous appartient de nous transmettre
dans les meilleurs délais les documents complémentaires listés ci-après :
- Copie d'un justificatif de domicile de moins de 6 mois
- Timbre fiscal d'un montant de 86€
À défaut de production des pièces susvisées dans un délai de 30 jours à compter de la présente,
votre demande sera classée sans suite.
Nous vous prions d'agréer, Monsieur, l'expression de nos salutations distinguées.
```
Response:
```
Nous recevons votre requête pour renouveler votre passeport datant du 15 mars 2023.
Suite à l'examen de votre dossier, il a été déterminé que les documents fournis
ne respectent pas les exigences mentionnées dans la section 5-1 du décret n°2005-1726 du 30 décembre 2005.
Malgré les pièces déjà fournies, vous devez nous envoyer rapidement les éléments suivants :
- Une copie d'un justificatif de domicile datant de moins de 6 mois.
- Un timbre fiscal d'une valeur de 86 €.
Si ces documents ne sont pas fournis dans un délai de 30 jours à compter de cette lettre, votre demande sera close.
Nous vous prions d'accepter, Monsieur, l'expression de nos salutations respectueuses.
```
# Example with Transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("baconnier/Gaston-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("baconnier/Gaston-Llama-3-8B")
prompt = "Nonobstant les justificatifs déjà fournis"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
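The raw prompt above is simply continued as plain text; to reproduce the reformulation behaviour shown in the examples, the chat template should be applied first. A minimal sketch reusing the `model` and `tokenizer` loaded above (the generation settings are illustrative):
```python
messages = [{"role": "user", "content": "Nonobstant les justificatifs déjà fournis"}]
# Render the conversation with the model's chat template and open an assistant turn
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
generate_ids = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens
print(tokenizer.decode(generate_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```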
|
buelfhood/irc-facebook-react
|
buelfhood
| 2024-05-08T13:31:11Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:IRC_Adapters",
"roberta",
"dataset:IRC",
"region:us"
] | null | 2024-05-08T12:43:19Z |
---
tags:
- adapter-transformers
- adapterhub:IRC_Adapters
- roberta
datasets:
- IRC
---
# Adapter `buelfhood/irc-facebook-react` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [IRC_Adapters](https://adapterhub.ml/explore/IRC_Adapters/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("buelfhood/irc-facebook-react", source="hf", set_active=True)
```
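Once loaded, the adapter's classification head can be queried like any `transformers` model. A minimal sketch (the input string is illustrative, and the mapping from class index to label depends on how the head was trained):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("example input text", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # `model` with the active adapter from above

print(outputs.logits.argmax(dim=-1).item())  # predicted class index
```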
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Kendamarron/Tokara-0.5B-Chat-dolly-jimba
|
Kendamarron
| 2024-05-08T13:30:36Z | 152 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"ja",
"en",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:Kendamarron/jimba-instuction-1k-beta",
"dataset:Kendamarron/pret-a-porter-instruction-v0.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-06T15:24:56Z |
---
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE
language:
- ja
- en
pipeline_tag: text-generation
datasets:
- kunishou/databricks-dolly-15k-ja
- Kendamarron/jimba-instuction-1k-beta
- Kendamarron/pret-a-porter-instruction-v0.1
---
## About the model
This model is [Tokara-0.5B-v0.1](https://huggingface.co/Kendamarron/Tokara-0.5B-v0.1), a continued pretraining of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on 5B tokens of Japanese and English data, fine-tuned on Japanese instruction datasets.
For a 0.5B-parameter model, it holds up reasonably well in conversation.
Because the training data includes multi-turn datasets, it should also be able to carry out multi-turn conversations.
Possibly due to the model size, it starts repeating itself early unless repetition_penalty is set to around 1.15-1.25.
See [this article](https://zenn.dev/kendama/articles/55564e12da6e82) (in Japanese) for details.
## Datasets
- [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Kendamarron/jimba-instuction-1k-beta](https://huggingface.co/datasets/Kendamarron/jimba-instuction-1k-beta)
- [Kendamarron/pret-a-porter-instruction-v0.1](https://huggingface.co/datasets/Kendamarron/pret-a-porter-instruction-v0.1)
- Kendamarron/jimba-oasst2-ja (release in preparation)
jimba-oasst2-ja is a multi-turn synthetic dataset created with Swallow-MX, using oasst2 instructions as starting points.
## About the name
Named after the Tokara horse, a horse breed native to Japan.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
'Kendamarron/Tokara-0.5B-Chat-dolly-jimba',
torch_dtype=torch.bfloat16,
device_map=device,
)
tokenizer = AutoTokenizer.from_pretrained('Kendamarron/Tokara-0.5B-Chat-dolly-jimba')
messages = [
{"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
{"role": "user", "content": "野菜は体にいいですか?"}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=256,
do_sample=True,
top_p=0.95,
top_k=40,
temperature=0.7,
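# the note above suggests ~1.15-1.25 to delay repetition; 1.1 is the original example value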
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=2
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
|
buelfhood/irc-opencv-opencv
|
buelfhood
| 2024-05-08T13:23:25Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:IRC_Adapters",
"roberta",
"dataset:IRC",
"region:us"
] | null | 2024-05-08T13:23:22Z |
---
tags:
- adapter-transformers
- adapterhub:IRC_Adapters
- roberta
datasets:
- IRC
---
# Adapter `buelfhood/irc-opencv-opencv` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [IRC_Adapters](https://adapterhub.ml/explore/IRC_Adapters/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("buelfhood/irc-opencv-opencv", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
data-aces/Llama2-7B-FineTune-CT
|
data-aces
| 2024-05-08T13:21:07Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-04-02T05:27:47Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt
|
h4g3n
| 2024-05-08T13:15:42Z | 204 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-05-07T17:43:10Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
---
# h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt
The model is derived from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2).
It is a distilled variant with only 4 attention layers that was trained to keep the vector space aligned across 7 languages during distillation.
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt')
embeddings = model.encode(sentences)
print(embeddings)
```
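Since the model targets cross-lingual semantic search, embeddings of a sentence pair can be compared directly. A minimal sketch using the `sentence_transformers` utility functions (the sentence pair is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt')

# A German/English paraphrase pair; the aligned vector space should score these as similar
embeddings = model.encode(["Wie ist das Wetter heute?", "What is the weather like today?"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```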
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt')
model = AutoModel.from_pretrained('h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=h4g3n/multilingual-MiniLM-L12-de-en-es-fr-it-nl-pl-pt)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80592 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
buelfhood/irc-bitcoin-bitcoin
|
buelfhood
| 2024-05-08T13:15:19Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:IRC_Adapters",
"roberta",
"dataset:IRC",
"region:us"
] | null | 2024-05-08T13:15:16Z |
---
tags:
- adapter-transformers
- adapterhub:IRC_Adapters
- roberta
datasets:
- IRC
---
# Adapter `buelfhood/irc-bitcoin-bitcoin` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [IRC_Adapters](https://adapterhub.ml/explore/IRC_Adapters/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("buelfhood/irc-bitcoin-bitcoin", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Mag0g/Ezekiel25_18
|
Mag0g
| 2024-05-08T13:08:04Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T11:52:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ellight/gemma-2b-bnb-gguf
|
Ellight
| 2024-05-08T13:04:28Z | 8 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"trl",
"text-generation",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-08T12:52:10Z |
---
license: apache-2.0
language:
- hi
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-gguf
pipeline_tag: text-generation
---
# Uploaded model
- **Developed by:** Ellight
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Hindi-Gemma-2B-instruct (Instruction-tuned)
Hindi-Gemma-2B-instruct is an instruction-tuned Hindi large language model (LLM) with 2 billion parameters, based on Gemma 2B.
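Since the weights are published in GGUF format, they can presumably be served with `llama-cpp-python`. A minimal sketch (the `.gguf` filename is an assumption; check the repository file list for the actual name):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ellight/gemma-2b-bnb-gguf",
    filename="gemma-2b.gguf",  # hypothetical filename; replace with the actual GGUF file in the repo
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "नमस्ते! आप कौन हैं?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```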
|
whalesnik/rut5-base-multitask-finetuned
|
whalesnik
| 2024-05-08T12:59:42Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:cointegrated/rut5-base-multitask",
"base_model:finetune:cointegrated/rut5-base-multitask",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-07T14:54:58Z |
---
license: mit
base_model: cointegrated/rut5-base-multitask
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: rut5-base-multitask-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rut5-base-multitask-finetuned
This model is a fine-tuned version of [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8474
- Precision: [0.6929003596305847, 0.6133741736412048, 0.6369296908378601, 0.6576104164123535, 0.6581195592880249, 0.7057837843894958, 0.6906453371047974, 0.6295247077941895, 0.6032506227493286, 0.6801808476448059, 0.7086416482925415, 0.7715317010879517, 0.6965898871421814, 0.6664015054702759, 0.6693047285079956, 0.635832667350769, 0.7033517956733704, 0.6183526515960693, 0.6158597469329834, 0.6398658752441406, 0.6714366674423218, 0.6355723738670349, 0.6943302750587463, 0.6544321775436401, 0.6650948524475098, 0.7430660724639893, 0.6517236232757568, 0.7484419345855713, 0.0, 0.675719141960144, 0.6458128690719604, 0.6234389543533325, 0.6472302079200745, 0.6631240248680115, 0.6684064865112305, 0.6660122275352478, 0.6794019937515259, 0.6357454061508179, 0.6969460248947144, 0.5535992383956909, 0.7105198502540588, 0.6797340512275696, 0.5988055467605591, 0.67428058385849, 0.637294590473175, 0.6437563300132751, 0.7560627460479736, 0.6326324939727783, 0.7101137042045593, 0.7656958699226379, 0.6435254812240601, 0.6552819013595581, 0.6591821312904358, 0.6520820260047913, 0.7302312850952148, 0.6462039947509766, 0.6925369501113892, 0.6566042900085449, 0.695000171661377, 0.6302398443222046, 0.6504288911819458, 0.6807819604873657, 0.7568456530570984, 0.6529860496520996, 0.6187940239906311, 0.6020010709762573, 0.6868752837181091, 0.6300128102302551, 0.6296300888061523, 0.6843270063400269, 0.6368266344070435, 0.6201039552688599, 0.6535858511924744, 0.6613433957099915, 0.6574115753173828, 0.6770792007446289, 0.7600529193878174, 0.6565803289413452, 0.698392391204834, 0.6894751787185669, 0.6980857849121094, 0.7112835645675659, 0.668113112449646, 0.6581467986106873, 0.659957766532898, 0.6379042863845825, 0.6446895599365234, 0.6587048768997192, 0.652809739112854, 0.6526193618774414, 0.6537817716598511, 0.7116091847419739, 0.7003600597381592, 0.6960922479629517, 0.7021352052688599, 0.656990647315979, 0.6734673380851746, 0.5993854999542236, 0.6549721956253052, 0.7079775333404541, 0.7105451822280884, 0.673061728477478, 0.6398658752441406, 0.7279029488563538, 0.632804811000824, 0.6385118365287781, 0.6277474164962769, 0.694757878780365, 0.637184739112854, 0.6614517569541931, 0.673061728477478, 0.6937723159790039, 0.6959199905395508, 0.6462610960006714, 0.6720056533813477, 0.6341558694839478, 0.658045768737793, 0.6242990493774414, 0.6955231428146362, 0.6746744513511658, 0.6259623765945435, 0.6322343349456787, 0.6386721730232239, 0.6636806130409241, 0.7008563280105591, 0.7186553478240967, 0.742395281791687, 0.5930124521255493, 0.660872220993042, 0.6856606006622314, 0.7234644889831543, 0.661785364151001, 0.6491066217422485, 0.656990647315979, 0.6655586957931519, 0.6928428411483765, 0.6648601293563843, 0.6940321922302246, 0.6660351753234863, 0.673884391784668, 0.654864490032196, 0.6751419305801392, 0.7360048294067383, 0.6635060906410217, 0.6750009655952454, 0.6638166904449463, 0.6382589936256409, 0.7049083709716797, 0.6431376934051514, 0.6806449294090271, 0.652288556098938, 0.655288815498352, 0.6625323295593262, 0.6482013463973999, 0.6454973816871643, 0.6514257192611694, 0.6625323295593262, 0.627372682094574, 0.6609578132629395, 0.7002651691436768, 0.6899219155311584, 0.6566691398620605, 0.6862121224403381, 0.6149857640266418, 0.6542688608169556, 0.6522969603538513, 0.635832667350769, 0.6770160794258118, 0.6105413436889648, 0.5959399938583374, 0.5996183753013611, 0.6474558115005493, 0.6497341394424438, 0.6824425458908081, 0.6511133313179016, 0.6678274869918823, 0.6700403690338135, 0.6197404861450195, 0.6774248480796814, 
0.748753011226654, 0.5847424864768982, 0.7290750741958618, 0.6362240314483643, 0.6821150779724121, 0.6743643879890442, 0.6179369688034058, 0.6248660087585449, 0.6989448070526123, 0.7265845537185669, 0.6207034587860107, 0.6517062187194824, 0.6871163249015808, 0.6199581623077393, 0.7777968645095825, 0.6263978481292725, 0.6609578132629395, 0.6415587663650513, 0.700570821762085, 0.657017171382904, 0.6716399788856506, 0.67547208070755, 0.6563198566436768, 0.7473193407058716, 0.6515363454818726, 0.7003060579299927, 0.650084376335144, 0.6724016666412354, 0.7201712131500244, 0.6008522510528564, 0.667036771774292, 0.6724625825881958, 0.7508677840232849, 0.6464987397193909, 0.6168344020843506, 0.6700884103775024, 0.6235026121139526, 0.6438812613487244, 0.6425885558128357, 0.6213839054107666, 0.6382965445518494, 0.6957192420959473, 0.6555189490318298, 0.6971650719642639, 0.7006597518920898, 0.7391626834869385, 0.6715911626815796, 0.6253701448440552, 0.652809739112854, 0.6811630725860596, 0.7285609245300293, 0.6595237255096436, 0.5911452770233154, 0.665739893913269, 0.6692743897438049, 0.6335341930389404, 0.6147409677505493, 0.6383231282234192, 0.6025363206863403, 0.6212208271026611, 0.6143321990966797, 0.6346527338027954, 0.6133826375007629, 0.5720107555389404, 0.7086416482925415, 0.6206754446029663, 0.7020885348320007, 0.6567215919494629, 0.6147409677505493, 0.6760094165802002, 0.6580663323402405, 0.6447792649269104, 0.6912583112716675, 0.7000113725662231, 0.6154043078422546, 0.640922486782074, 0.6739322543144226, 0.655249834060669, 0.640444278717041, 0.6667684316635132, 0.6733943819999695, 0.6508811712265015, 0.7618014812469482, 0.646928071975708]
- Recall: [0.6649062037467957, 0.6275476217269897, 0.6429741382598877, 0.6600346565246582, 0.6316436529159546, 0.6886480450630188, 0.6775014996528625, 0.6498724818229675, 0.572837233543396, 0.6461964845657349, 0.7162613868713379, 0.7794148325920105, 0.6915059685707092, 0.6392509937286377, 0.6521221995353699, 0.6284155249595642, 0.7132933139801025, 0.6361821293830872, 0.6224191784858704, 0.6523096561431885, 0.6769837737083435, 0.6243550777435303, 0.6266458034515381, 0.652681827545166, 0.6484395861625671, 0.7511179447174072, 0.653579831123352, 0.698567271232605, 0.0, 0.6781419515609741, 0.6337482333183289, 0.653876543045044, 0.6137877702713013, 0.6642259359359741, 0.7020760774612427, 0.634747326374054, 0.7031091451644897, 0.6552716493606567, 0.6741659641265869, 0.5695792436599731, 0.7114435434341431, 0.6849547624588013, 0.59352046251297, 0.6778455972671509, 0.5988288521766663, 0.6323497295379639, 0.7332395911216736, 0.6469723582267761, 0.6880676746368408, 0.7390843629837036, 0.5997792482376099, 0.6754531264305115, 0.6813293099403381, 0.6666642427444458, 0.68291175365448, 0.6306267976760864, 0.6752303838729858, 0.6781089901924133, 0.7067015171051025, 0.6026639938354492, 0.6560887694358826, 0.6354584693908691, 0.7168669104576111, 0.6409149169921875, 0.6049350500106812, 0.5960757732391357, 0.669973611831665, 0.6422098278999329, 0.6395115256309509, 0.6887341141700745, 0.6432803869247437, 0.6286009550094604, 0.6391538381576538, 0.6414529085159302, 0.6532565355300903, 0.6443188190460205, 0.7384698390960693, 0.6441894769668579, 0.7132408618927002, 0.7013686895370483, 0.6652677059173584, 0.6880629062652588, 0.6502605080604553, 0.6269145011901855, 0.6791725754737854, 0.5927161574363708, 0.6064770221710205, 0.661729097366333, 0.651779055595398, 0.6646026372909546, 0.632445216178894, 0.725436806678772, 0.6621347665786743, 0.6755839586257935, 0.6738181114196777, 0.6566475033760071, 0.6619706153869629, 0.5842774510383606, 0.686192512512207, 0.7009044885635376, 0.7354075908660889, 0.6788778305053711, 0.6523096561431885, 0.6869735717773438, 0.6263010501861572, 0.6256016492843628, 0.6423155069351196, 0.6860323548316956, 0.6310012340545654, 0.6260640621185303, 0.6788778305053711, 0.6427903771400452, 0.6636099815368652, 0.6568259596824646, 0.6437113881111145, 0.6226024627685547, 0.6720809936523438, 0.6328999996185303, 0.699752926826477, 0.636201798915863, 0.644474446773529, 0.6302847862243652, 0.6292503476142883, 0.6662095785140991, 0.7015441060066223, 0.7038581371307373, 0.7442997694015503, 0.6347897052764893, 0.6376725435256958, 0.6901686191558838, 0.695481538772583, 0.6469976902008057, 0.6211618185043335, 0.6566475033760071, 0.6544449329376221, 0.6582697629928589, 0.6453942656517029, 0.670850932598114, 0.655735969543457, 0.6552658081054688, 0.6520406603813171, 0.678108274936676, 0.7265080213546753, 0.6506229639053345, 0.6752663850784302, 0.6709432601928711, 0.6129387021064758, 0.6714939475059509, 0.6591033935546875, 0.634405791759491, 0.6656297445297241, 0.6381163001060486, 0.6431194543838501, 0.6446937322616577, 0.6875370740890503, 0.6325691938400269, 0.6431194543838501, 0.6313208341598511, 0.6314756870269775, 0.7084503769874573, 0.7146296501159668, 0.6485159993171692, 0.6599531173706055, 0.6230310797691345, 0.6420427560806274, 0.6593505144119263, 0.6284155249595642, 0.6823000907897949, 0.5802127122879028, 0.5710383057594299, 0.5828155279159546, 0.6076115369796753, 0.6196562051773071, 0.6699979901313782, 0.6352205276489258, 0.6752637624740601, 0.6526370644569397, 0.6168854236602783, 0.69868004322052, 
0.7261859178543091, 0.5989089012145996, 0.7012088298797607, 0.5866293907165527, 0.7031817436218262, 0.6830926537513733, 0.6432316303253174, 0.6219719648361206, 0.6903827786445618, 0.6756868362426758, 0.6180089712142944, 0.6410292387008667, 0.689997673034668, 0.5841130614280701, 0.6860305070877075, 0.6241101026535034, 0.6314756870269775, 0.6247932314872742, 0.697077214717865, 0.6323742270469666, 0.6652247905731201, 0.6622908115386963, 0.6254100203514099, 0.7815357446670532, 0.6599846482276917, 0.6917272806167603, 0.6358695030212402, 0.6401605010032654, 0.6963860392570496, 0.5883089900016785, 0.6543786525726318, 0.6658977270126343, 0.7610270977020264, 0.6357758045196533, 0.6123616099357605, 0.698261022567749, 0.6122673749923706, 0.6471544504165649, 0.6635059118270874, 0.6274517178535461, 0.6202007532119751, 0.6825913190841675, 0.680517315864563, 0.6648703813552856, 0.6754068732261658, 0.6989795565605164, 0.6868336200714111, 0.6108724474906921, 0.651779055595398, 0.6650040149688721, 0.7022161483764648, 0.6511324644088745, 0.616610050201416, 0.6444504261016846, 0.6764492988586426, 0.6493207216262817, 0.6018832921981812, 0.6349364519119263, 0.6164268851280212, 0.618169367313385, 0.6206363439559937, 0.6173461079597473, 0.589404821395874, 0.5812571048736572, 0.7162613868713379, 0.6146183013916016, 0.677343487739563, 0.7096463441848755, 0.6018832921981812, 0.6628992557525635, 0.6039726734161377, 0.6497476100921631, 0.6936290264129639, 0.7000452280044556, 0.5954415798187256, 0.6570545434951782, 0.6340285539627075, 0.6425281763076782, 0.6396463513374329, 0.6707313656806946, 0.6983391046524048, 0.6416543126106262, 0.7060319185256958, 0.6359407305717468]
- F1: [0.6786146759986877, 0.6203799843788147, 0.6399376392364502, 0.6588203310966492, 0.6446098685264587, 0.697110652923584, 0.6840102076530457, 0.6395367383956909, 0.587650716304779, 0.6627532243728638, 0.7124311327934265, 0.7754532694816589, 0.6940386295318604, 0.6525439620018005, 0.660601794719696, 0.632102370262146, 0.7082876563072205, 0.6271407008171082, 0.6191221475601196, 0.6460278034210205, 0.6741988658905029, 0.6299137473106384, 0.6587540507316589, 0.6535558700561523, 0.6566615700721741, 0.7470703125, 0.6526504158973694, 0.722645103931427, 0.0, 0.6769284009933472, 0.6397237181663513, 0.6382951140403748, 0.6300655007362366, 0.663674533367157, 0.6848276853561401, 0.6500040292739868, 0.6910523176193237, 0.6453608870506287, 0.6853667497634888, 0.5614755749702454, 0.7109813690185547, 0.6823344230651855, 0.5961512327194214, 0.6760584115982056, 0.617463231086731, 0.6380019783973694, 0.7444761991500854, 0.6397220492362976, 0.6989168524742126, 0.7521547675132751, 0.6208827495574951, 0.6652146577835083, 0.6700727939605713, 0.659292459487915, 0.7057793140411377, 0.6383203864097595, 0.6837741732597351, 0.6671833992004395, 0.7008020281791687, 0.616143524646759, 0.6532465815544128, 0.657339870929718, 0.7363139986991882, 0.6468941569328308, 0.6117860078811646, 0.5990237593650818, 0.6783191561698914, 0.6360528469085693, 0.6345322728157043, 0.6865234375, 0.6400372385978699, 0.6243235468864441, 0.6462892889976501, 0.6512463092803955, 0.6553274989128113, 0.6602928638458252, 0.7491059303283691, 0.6503258943557739, 0.7057384848594666, 0.6953710913658142, 0.6812817454338074, 0.6994805932044983, 0.6590659022331238, 0.6421510577201843, 0.6694273352622986, 0.6144805550575256, 0.6249997615814209, 0.6602135300636292, 0.6522939801216125, 0.6585564613342285, 0.6429365277290344, 0.7184564471244812, 0.6807112097740173, 0.6856848001480103, 0.6876852512359619, 0.6568189859390259, 0.6676694750785828, 0.5917350649833679, 0.6702190041542053, 0.7044232487678528, 0.7227626442909241, 0.6759572625160217, 0.6460278034210205, 0.7068462371826172, 0.6295360922813416, 0.6319908499717712, 0.6349478960037231, 0.690367579460144, 0.6340779066085815, 0.6432715654373169, 0.6759572625160217, 0.667309045791626, 0.6793810129165649, 0.6515007019042969, 0.6575543284416199, 0.6283260583877563, 0.6649893522262573, 0.6285700798034668, 0.6976316571235657, 0.6548735499382019, 0.6350835561752319, 0.6312580704689026, 0.6339262127876282, 0.6649426817893982, 0.701200008392334, 0.711179792881012, 0.7433462738990784, 0.6131902933120728, 0.6490651369094849, 0.6879072189331055, 0.7091970443725586, 0.6543079614639282, 0.6348268389701843, 0.6568189859390259, 0.6599550247192383, 0.6751139760017395, 0.6549826264381409, 0.6822446584701538, 0.6608454585075378, 0.6644446849822998, 0.653449535369873, 0.6766218543052673, 0.7312255501747131, 0.6570013761520386, 0.6751335859298706, 0.6673609614372253, 0.625342607498169, 0.6877955198287964, 0.6510226726531982, 0.6567124128341675, 0.6588916182518005, 0.6465886235237122, 0.6526815891265869, 0.6464427709579468, 0.665854275226593, 0.6418589949607849, 0.6526815891265869, 0.6293405890464783, 0.6458804607391357, 0.704334020614624, 0.7020584940910339, 0.6525671482086182, 0.6728265285491943, 0.6189823150634766, 0.6480981707572937, 0.6558048129081726, 0.632102370262146, 0.6796478629112244, 0.5949907898902893, 0.5832234025001526, 0.5910975933074951, 0.6269012093544006, 0.6343388557434082, 0.6761629581451416, 0.643068790435791, 0.6715250611305237, 0.6612242460250854, 0.6183096766471863, 0.6878883242607117, 
0.7372968792915344, 0.5917409062385559, 0.7148704528808594, 0.6104210019111633, 0.6924881935119629, 0.6787004470825195, 0.6303306818008423, 0.6234156489372253, 0.6946374177932739, 0.7002120018005371, 0.6193532943725586, 0.6463236212730408, 0.6885539293289185, 0.6015020608901978, 0.729037344455719, 0.6252518892288208, 0.6458804607391357, 0.6330650448799133, 0.698819637298584, 0.6444602012634277, 0.6684170365333557, 0.6688165664672852, 0.6404922008514404, 0.7640447020530701, 0.6557332873344421, 0.695990264415741, 0.6428983807563782, 0.6558851003646851, 0.708078920841217, 0.5945144295692444, 0.6606470942497253, 0.6691640615463257, 0.7559133172035217, 0.6410924792289734, 0.6145898699760437, 0.6838846802711487, 0.6178339123725891, 0.6455137133598328, 0.6528797745704651, 0.6244031190872192, 0.6291185617446899, 0.689092755317688, 0.6677843332290649, 0.6806347966194153, 0.6878015995025635, 0.7185097336769104, 0.6791268587112427, 0.6180363297462463, 0.6522939801216125, 0.6729865670204163, 0.7151459455490112, 0.6553012132644653, 0.6036092042922974, 0.6549221873283386, 0.6728427410125732, 0.6413303017616272, 0.6082441806793213, 0.6366252899169922, 0.6094024777412415, 0.6196913719177246, 0.6174681782722473, 0.6258797645568848, 0.6011547446250916, 0.5765968561172485, 0.7124311327934265, 0.6176320314407349, 0.6894940733909607, 0.6821589469909668, 0.6082441806793213, 0.6693901419639587, 0.6298602223396301, 0.6472539305686951, 0.6924416422843933, 0.7000283002853394, 0.6052583456039429, 0.6488882899284363, 0.6533717513084412, 0.6488266587257385, 0.6400451064109802, 0.6687440276145935, 0.6856399774551392, 0.6462348103523254, 0.7328572273254395, 0.6413873434066772]
- Hashcode: bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) (a BERTScore configuration hash; the precision/recall/F1 lists above are per-example BERTScore values on the evaluation set)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
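For reference, the hyperparameters above correspond roughly to the following `transformers` setup. A minimal sketch (`output_dir` and the use of `Seq2SeqTrainingArguments` are assumptions; only the listed values come from the card):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="rut5-base-multitask-finetuned",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```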
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Hashcode |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--:|:--------:|
| No log        | 1.0   | 187  | 3.0813          | (per-example list) | (per-example list) | (per-example list) | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) |
| No log        | 2.0   | 374  | 2.9486          | (per-example list) | (per-example list) | (per-example list) | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) |
| 3.5442        | 3.0   | 561  | 2.8878          | (per-example list) | (per-example list) | (per-example list) | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) |
| 3.5442        | 4.0   | 748  | 2.8569          | (per-example list) | (per-example list) | (per-example list) | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) |
| 3.5442 | 5.0 | 935 | 2.8474 | [0.6929003596305847, 0.6133741736412048, 0.6369296908378601, 0.6576104164123535, 0.6581195592880249, 0.7057837843894958, 0.6906453371047974, 0.6295247077941895, 0.6032506227493286, 0.6801808476448059, 0.7086416482925415, 0.7715317010879517, 0.6965898871421814, 0.6664015054702759, 0.6693047285079956, 0.635832667350769, 0.7033517956733704, 0.6183526515960693, 0.6158597469329834, 0.6398658752441406, 0.6714366674423218, 0.6355723738670349, 0.6943302750587463, 0.6544321775436401, 0.6650948524475098, 0.7430660724639893, 0.6517236232757568, 0.7484419345855713, 0.0, 0.675719141960144, 0.6458128690719604, 0.6234389543533325, 0.6472302079200745, 0.6631240248680115, 0.6684064865112305, 0.6660122275352478, 0.6794019937515259, 0.6357454061508179, 0.6969460248947144, 0.5535992383956909, 0.7105198502540588, 0.6797340512275696, 0.5988055467605591, 0.67428058385849, 0.637294590473175, 0.6437563300132751, 0.7560627460479736, 0.6326324939727783, 0.7101137042045593, 0.7656958699226379, 0.6435254812240601, 0.6552819013595581, 0.6591821312904358, 0.6520820260047913, 0.7302312850952148, 0.6462039947509766, 0.6925369501113892, 0.6566042900085449, 0.695000171661377, 0.6302398443222046, 0.6504288911819458, 0.6807819604873657, 0.7568456530570984, 0.6529860496520996, 0.6187940239906311, 0.6020010709762573, 0.6868752837181091, 0.6300128102302551, 0.6296300888061523, 0.6843270063400269, 0.6368266344070435, 0.6201039552688599, 0.6535858511924744, 0.6613433957099915, 0.6574115753173828, 0.6770792007446289, 0.7600529193878174, 0.6565803289413452, 0.698392391204834, 0.6894751787185669, 0.6980857849121094, 0.7112835645675659, 0.668113112449646, 0.6581467986106873, 0.659957766532898, 0.6379042863845825, 0.6446895599365234, 0.6587048768997192, 0.652809739112854, 0.6526193618774414, 0.6537817716598511, 0.7116091847419739, 0.7003600597381592, 0.6960922479629517, 0.7021352052688599, 0.656990647315979, 0.6734673380851746, 0.5993854999542236, 0.6549721956253052, 0.7079775333404541, 0.7105451822280884, 0.673061728477478, 0.6398658752441406, 0.7279029488563538, 0.632804811000824, 0.6385118365287781, 0.6277474164962769, 0.694757878780365, 0.637184739112854, 0.6614517569541931, 0.673061728477478, 0.6937723159790039, 0.6959199905395508, 0.6462610960006714, 0.6720056533813477, 0.6341558694839478, 0.658045768737793, 0.6242990493774414, 0.6955231428146362, 0.6746744513511658, 0.6259623765945435, 0.6322343349456787, 0.6386721730232239, 0.6636806130409241, 0.7008563280105591, 0.7186553478240967, 0.742395281791687, 0.5930124521255493, 0.660872220993042, 0.6856606006622314, 0.7234644889831543, 0.661785364151001, 0.6491066217422485, 0.656990647315979, 0.6655586957931519, 0.6928428411483765, 0.6648601293563843, 0.6940321922302246, 0.6660351753234863, 0.673884391784668, 0.654864490032196, 0.6751419305801392, 0.7360048294067383, 0.6635060906410217, 0.6750009655952454, 0.6638166904449463, 0.6382589936256409, 0.7049083709716797, 0.6431376934051514, 0.6806449294090271, 0.652288556098938, 0.655288815498352, 0.6625323295593262, 0.6482013463973999, 0.6454973816871643, 0.6514257192611694, 0.6625323295593262, 0.627372682094574, 0.6609578132629395, 0.7002651691436768, 0.6899219155311584, 0.6566691398620605, 0.6862121224403381, 0.6149857640266418, 0.6542688608169556, 0.6522969603538513, 0.635832667350769, 0.6770160794258118, 0.6105413436889648, 0.5959399938583374, 0.5996183753013611, 0.6474558115005493, 0.6497341394424438, 0.6824425458908081, 0.6511133313179016, 0.6678274869918823, 0.6700403690338135, 0.6197404861450195, 
0.6774248480796814, 0.748753011226654, 0.5847424864768982, 0.7290750741958618, 0.6362240314483643, 0.6821150779724121, 0.6743643879890442, 0.6179369688034058, 0.6248660087585449, 0.6989448070526123, 0.7265845537185669, 0.6207034587860107, 0.6517062187194824, 0.6871163249015808, 0.6199581623077393, 0.7777968645095825, 0.6263978481292725, 0.6609578132629395, 0.6415587663650513, 0.700570821762085, 0.657017171382904, 0.6716399788856506, 0.67547208070755, 0.6563198566436768, 0.7473193407058716, 0.6515363454818726, 0.7003060579299927, 0.650084376335144, 0.6724016666412354, 0.7201712131500244, 0.6008522510528564, 0.667036771774292, 0.6724625825881958, 0.7508677840232849, 0.6464987397193909, 0.6168344020843506, 0.6700884103775024, 0.6235026121139526, 0.6438812613487244, 0.6425885558128357, 0.6213839054107666, 0.6382965445518494, 0.6957192420959473, 0.6555189490318298, 0.6971650719642639, 0.7006597518920898, 0.7391626834869385, 0.6715911626815796, 0.6253701448440552, 0.652809739112854, 0.6811630725860596, 0.7285609245300293, 0.6595237255096436, 0.5911452770233154, 0.665739893913269, 0.6692743897438049, 0.6335341930389404, 0.6147409677505493, 0.6383231282234192, 0.6025363206863403, 0.6212208271026611, 0.6143321990966797, 0.6346527338027954, 0.6133826375007629, 0.5720107555389404, 0.7086416482925415, 0.6206754446029663, 0.7020885348320007, 0.6567215919494629, 0.6147409677505493, 0.6760094165802002, 0.6580663323402405, 0.6447792649269104, 0.6912583112716675, 0.7000113725662231, 0.6154043078422546, 0.640922486782074, 0.6739322543144226, 0.655249834060669, 0.640444278717041, 0.6667684316635132, 0.6733943819999695, 0.6508811712265015, 0.7618014812469482, 0.646928071975708] | [0.6649062037467957, 0.6275476217269897, 0.6429741382598877, 0.6600346565246582, 0.6316436529159546, 0.6886480450630188, 0.6775014996528625, 0.6498724818229675, 0.572837233543396, 0.6461964845657349, 0.7162613868713379, 0.7794148325920105, 0.6915059685707092, 0.6392509937286377, 0.6521221995353699, 0.6284155249595642, 0.7132933139801025, 0.6361821293830872, 0.6224191784858704, 0.6523096561431885, 0.6769837737083435, 0.6243550777435303, 0.6266458034515381, 0.652681827545166, 0.6484395861625671, 0.7511179447174072, 0.653579831123352, 0.698567271232605, 0.0, 0.6781419515609741, 0.6337482333183289, 0.653876543045044, 0.6137877702713013, 0.6642259359359741, 0.7020760774612427, 0.634747326374054, 0.7031091451644897, 0.6552716493606567, 0.6741659641265869, 0.5695792436599731, 0.7114435434341431, 0.6849547624588013, 0.59352046251297, 0.6778455972671509, 0.5988288521766663, 0.6323497295379639, 0.7332395911216736, 0.6469723582267761, 0.6880676746368408, 0.7390843629837036, 0.5997792482376099, 0.6754531264305115, 0.6813293099403381, 0.6666642427444458, 0.68291175365448, 0.6306267976760864, 0.6752303838729858, 0.6781089901924133, 0.7067015171051025, 0.6026639938354492, 0.6560887694358826, 0.6354584693908691, 0.7168669104576111, 0.6409149169921875, 0.6049350500106812, 0.5960757732391357, 0.669973611831665, 0.6422098278999329, 0.6395115256309509, 0.6887341141700745, 0.6432803869247437, 0.6286009550094604, 0.6391538381576538, 0.6414529085159302, 0.6532565355300903, 0.6443188190460205, 0.7384698390960693, 0.6441894769668579, 0.7132408618927002, 0.7013686895370483, 0.6652677059173584, 0.6880629062652588, 0.6502605080604553, 0.6269145011901855, 0.6791725754737854, 0.5927161574363708, 0.6064770221710205, 0.661729097366333, 0.651779055595398, 0.6646026372909546, 0.632445216178894, 0.725436806678772, 0.6621347665786743, 0.6755839586257935, 
0.6738181114196777, 0.6566475033760071, 0.6619706153869629, 0.5842774510383606, 0.686192512512207, 0.7009044885635376, 0.7354075908660889, 0.6788778305053711, 0.6523096561431885, 0.6869735717773438, 0.6263010501861572, 0.6256016492843628, 0.6423155069351196, 0.6860323548316956, 0.6310012340545654, 0.6260640621185303, 0.6788778305053711, 0.6427903771400452, 0.6636099815368652, 0.6568259596824646, 0.6437113881111145, 0.6226024627685547, 0.6720809936523438, 0.6328999996185303, 0.699752926826477, 0.636201798915863, 0.644474446773529, 0.6302847862243652, 0.6292503476142883, 0.6662095785140991, 0.7015441060066223, 0.7038581371307373, 0.7442997694015503, 0.6347897052764893, 0.6376725435256958, 0.6901686191558838, 0.695481538772583, 0.6469976902008057, 0.6211618185043335, 0.6566475033760071, 0.6544449329376221, 0.6582697629928589, 0.6453942656517029, 0.670850932598114, 0.655735969543457, 0.6552658081054688, 0.6520406603813171, 0.678108274936676, 0.7265080213546753, 0.6506229639053345, 0.6752663850784302, 0.6709432601928711, 0.6129387021064758, 0.6714939475059509, 0.6591033935546875, 0.634405791759491, 0.6656297445297241, 0.6381163001060486, 0.6431194543838501, 0.6446937322616577, 0.6875370740890503, 0.6325691938400269, 0.6431194543838501, 0.6313208341598511, 0.6314756870269775, 0.7084503769874573, 0.7146296501159668, 0.6485159993171692, 0.6599531173706055, 0.6230310797691345, 0.6420427560806274, 0.6593505144119263, 0.6284155249595642, 0.6823000907897949, 0.5802127122879028, 0.5710383057594299, 0.5828155279159546, 0.6076115369796753, 0.6196562051773071, 0.6699979901313782, 0.6352205276489258, 0.6752637624740601, 0.6526370644569397, 0.6168854236602783, 0.69868004322052, 0.7261859178543091, 0.5989089012145996, 0.7012088298797607, 0.5866293907165527, 0.7031817436218262, 0.6830926537513733, 0.6432316303253174, 0.6219719648361206, 0.6903827786445618, 0.6756868362426758, 0.6180089712142944, 0.6410292387008667, 0.689997673034668, 0.5841130614280701, 0.6860305070877075, 0.6241101026535034, 0.6314756870269775, 0.6247932314872742, 0.697077214717865, 0.6323742270469666, 0.6652247905731201, 0.6622908115386963, 0.6254100203514099, 0.7815357446670532, 0.6599846482276917, 0.6917272806167603, 0.6358695030212402, 0.6401605010032654, 0.6963860392570496, 0.5883089900016785, 0.6543786525726318, 0.6658977270126343, 0.7610270977020264, 0.6357758045196533, 0.6123616099357605, 0.698261022567749, 0.6122673749923706, 0.6471544504165649, 0.6635059118270874, 0.6274517178535461, 0.6202007532119751, 0.6825913190841675, 0.680517315864563, 0.6648703813552856, 0.6754068732261658, 0.6989795565605164, 0.6868336200714111, 0.6108724474906921, 0.651779055595398, 0.6650040149688721, 0.7022161483764648, 0.6511324644088745, 0.616610050201416, 0.6444504261016846, 0.6764492988586426, 0.6493207216262817, 0.6018832921981812, 0.6349364519119263, 0.6164268851280212, 0.618169367313385, 0.6206363439559937, 0.6173461079597473, 0.589404821395874, 0.5812571048736572, 0.7162613868713379, 0.6146183013916016, 0.677343487739563, 0.7096463441848755, 0.6018832921981812, 0.6628992557525635, 0.6039726734161377, 0.6497476100921631, 0.6936290264129639, 0.7000452280044556, 0.5954415798187256, 0.6570545434951782, 0.6340285539627075, 0.6425281763076782, 0.6396463513374329, 0.6707313656806946, 0.6983391046524048, 0.6416543126106262, 0.7060319185256958, 0.6359407305717468] | [0.6786146759986877, 0.6203799843788147, 0.6399376392364502, 0.6588203310966492, 0.6446098685264587, 0.697110652923584, 0.6840102076530457, 0.6395367383956909, 0.587650716304779, 
0.6627532243728638, 0.7124311327934265, 0.7754532694816589, 0.6940386295318604, 0.6525439620018005, 0.660601794719696, 0.632102370262146, 0.7082876563072205, 0.6271407008171082, 0.6191221475601196, 0.6460278034210205, 0.6741988658905029, 0.6299137473106384, 0.6587540507316589, 0.6535558700561523, 0.6566615700721741, 0.7470703125, 0.6526504158973694, 0.722645103931427, 0.0, 0.6769284009933472, 0.6397237181663513, 0.6382951140403748, 0.6300655007362366, 0.663674533367157, 0.6848276853561401, 0.6500040292739868, 0.6910523176193237, 0.6453608870506287, 0.6853667497634888, 0.5614755749702454, 0.7109813690185547, 0.6823344230651855, 0.5961512327194214, 0.6760584115982056, 0.617463231086731, 0.6380019783973694, 0.7444761991500854, 0.6397220492362976, 0.6989168524742126, 0.7521547675132751, 0.6208827495574951, 0.6652146577835083, 0.6700727939605713, 0.659292459487915, 0.7057793140411377, 0.6383203864097595, 0.6837741732597351, 0.6671833992004395, 0.7008020281791687, 0.616143524646759, 0.6532465815544128, 0.657339870929718, 0.7363139986991882, 0.6468941569328308, 0.6117860078811646, 0.5990237593650818, 0.6783191561698914, 0.6360528469085693, 0.6345322728157043, 0.6865234375, 0.6400372385978699, 0.6243235468864441, 0.6462892889976501, 0.6512463092803955, 0.6553274989128113, 0.6602928638458252, 0.7491059303283691, 0.6503258943557739, 0.7057384848594666, 0.6953710913658142, 0.6812817454338074, 0.6994805932044983, 0.6590659022331238, 0.6421510577201843, 0.6694273352622986, 0.6144805550575256, 0.6249997615814209, 0.6602135300636292, 0.6522939801216125, 0.6585564613342285, 0.6429365277290344, 0.7184564471244812, 0.6807112097740173, 0.6856848001480103, 0.6876852512359619, 0.6568189859390259, 0.6676694750785828, 0.5917350649833679, 0.6702190041542053, 0.7044232487678528, 0.7227626442909241, 0.6759572625160217, 0.6460278034210205, 0.7068462371826172, 0.6295360922813416, 0.6319908499717712, 0.6349478960037231, 0.690367579460144, 0.6340779066085815, 0.6432715654373169, 0.6759572625160217, 0.667309045791626, 0.6793810129165649, 0.6515007019042969, 0.6575543284416199, 0.6283260583877563, 0.6649893522262573, 0.6285700798034668, 0.6976316571235657, 0.6548735499382019, 0.6350835561752319, 0.6312580704689026, 0.6339262127876282, 0.6649426817893982, 0.701200008392334, 0.711179792881012, 0.7433462738990784, 0.6131902933120728, 0.6490651369094849, 0.6879072189331055, 0.7091970443725586, 0.6543079614639282, 0.6348268389701843, 0.6568189859390259, 0.6599550247192383, 0.6751139760017395, 0.6549826264381409, 0.6822446584701538, 0.6608454585075378, 0.6644446849822998, 0.653449535369873, 0.6766218543052673, 0.7312255501747131, 0.6570013761520386, 0.6751335859298706, 0.6673609614372253, 0.625342607498169, 0.6877955198287964, 0.6510226726531982, 0.6567124128341675, 0.6588916182518005, 0.6465886235237122, 0.6526815891265869, 0.6464427709579468, 0.665854275226593, 0.6418589949607849, 0.6526815891265869, 0.6293405890464783, 0.6458804607391357, 0.704334020614624, 0.7020584940910339, 0.6525671482086182, 0.6728265285491943, 0.6189823150634766, 0.6480981707572937, 0.6558048129081726, 0.632102370262146, 0.6796478629112244, 0.5949907898902893, 0.5832234025001526, 0.5910975933074951, 0.6269012093544006, 0.6343388557434082, 0.6761629581451416, 0.643068790435791, 0.6715250611305237, 0.6612242460250854, 0.6183096766471863, 0.6878883242607117, 0.7372968792915344, 0.5917409062385559, 0.7148704528808594, 0.6104210019111633, 0.6924881935119629, 0.6787004470825195, 0.6303306818008423, 0.6234156489372253, 0.6946374177932739, 0.7002120018005371, 
0.6193532943725586, 0.6463236212730408, 0.6885539293289185, 0.6015020608901978, 0.729037344455719, 0.6252518892288208, 0.6458804607391357, 0.6330650448799133, 0.698819637298584, 0.6444602012634277, 0.6684170365333557, 0.6688165664672852, 0.6404922008514404, 0.7640447020530701, 0.6557332873344421, 0.695990264415741, 0.6428983807563782, 0.6558851003646851, 0.708078920841217, 0.5945144295692444, 0.6606470942497253, 0.6691640615463257, 0.7559133172035217, 0.6410924792289734, 0.6145898699760437, 0.6838846802711487, 0.6178339123725891, 0.6455137133598328, 0.6528797745704651, 0.6244031190872192, 0.6291185617446899, 0.689092755317688, 0.6677843332290649, 0.6806347966194153, 0.6878015995025635, 0.7185097336769104, 0.6791268587112427, 0.6180363297462463, 0.6522939801216125, 0.6729865670204163, 0.7151459455490112, 0.6553012132644653, 0.6036092042922974, 0.6549221873283386, 0.6728427410125732, 0.6413303017616272, 0.6082441806793213, 0.6366252899169922, 0.6094024777412415, 0.6196913719177246, 0.6174681782722473, 0.6258797645568848, 0.6011547446250916, 0.5765968561172485, 0.7124311327934265, 0.6176320314407349, 0.6894940733909607, 0.6821589469909668, 0.6082441806793213, 0.6693901419639587, 0.6298602223396301, 0.6472539305686951, 0.6924416422843933, 0.7000283002853394, 0.6052583456039429, 0.6488882899284363, 0.6533717513084412, 0.6488266587257385, 0.6400451064109802, 0.6687440276145935, 0.6856399774551392, 0.6462348103523254, 0.7328572273254395, 0.6413873434066772] | bert-base-multilingual-cased_L9_no-idf_version=0.3.12(hug_trans=4.40.1) |
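The hashcode in the final column records how the per-sample precision/recall/F1 lists above were computed. As a minimal sketch (not part of the original training log), assuming the `bert-score` library that emits hashcodes of this form, with hypothetical candidate/reference strings standing in for the real evaluation data:

```python
# Sketch of how per-sample BERTScore lists like those above are produced.
# The candidate/reference strings here are illustrative placeholders.
from bert_score import score

cands = ["a generated summary", "another model prediction"]   # hypothetical
refs = ["the reference summary", "another reference text"]    # hypothetical

# Settings mirror the hashcode bert-base-multilingual-cased_L9_no-idf:
P, R, F1 = score(
    cands,
    refs,
    model_type="bert-base-multilingual-cased",  # model in the hashcode
    num_layers=9,                               # the "L9" in the hashcode
    idf=False,                                  # the "no-idf" in the hashcode
)
print(P.tolist(), R.tolist(), F1.tolist())      # one value per candidate
```

Each returned tensor has one entry per candidate, which is why the table cells contain one score per evaluation example.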
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DesignITlab/Llama_Brutalism_eng
|
DesignITlab
| 2024-05-08T12:55:47Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-08T10:59:19Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
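Pending instructions from the model authors, the following is a minimal, hypothetical sketch based only on the repository's tags (`llama`, `text-generation`, `4-bit`, `bitsandbytes`); the prompt string and generation settings are illustrative assumptions, not the authors' recommended usage:

```python
# Hypothetical loading sketch inferred from the repo tags, not an
# official example from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "DesignITlab/Llama_Brutalism_eng"

# The repo is tagged 4-bit / bitsandbytes, so load with 4-bit quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Illustrative prompt; replace with your own input.
inputs = tokenizer("Brutalist architecture is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```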
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|