| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Reynold97/Mistral-7B-v0.1-GonzalezPedrasa-QLoRA
|
Reynold97
| 2023-12-20T22:05:53Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-20T22:05:21Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
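In the absence of a documented snippet, here is a minimal, illustrative sketch for loading this PEFT adapter on top of its declared base model (`mistralai/Mistral-7B-v0.1`, per the card metadata); the precision, device placement, and example prompt are assumptions, not settings documented by the author.
```python
# Illustrative sketch: load the QLoRA adapter on top of the base Mistral-7B model.
# dtype/device_map and the example prompt are assumptions, not documented settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "Reynold97/Mistral-7B-v0.1-GonzalezPedrasa-QLoRA"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```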
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
badokorach/afriqa_afroxlmr_squad_v2-201223
|
badokorach
| 2023-12-20T22:03:05Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:masakhane/afriqa_afroxlmr_squad_v2",
"base_model:finetune:masakhane/afriqa_afroxlmr_squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-20T20:01:08Z |
---
license: mit
base_model: masakhane/afriqa_afroxlmr_squad_v2
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/afriqa_afroxlmr_squad_v2-201223
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/afriqa_afroxlmr_squad_v2-201223
This model is a fine-tuned version of [masakhane/afriqa_afroxlmr_squad_v2](https://huggingface.co/masakhane/afriqa_afroxlmr_squad_v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Validation Loss: 0.0
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
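As a question-answering checkpoint shipped with TensorFlow weights, it can presumably be used with the standard Transformers pipeline; a minimal sketch follows, where the question/context pair is illustrative only.
```python
# Minimal sketch: extractive QA with the Transformers pipeline.
# The question/context pair below is illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="badokorach/afriqa_afroxlmr_squad_v2-201223")
result = qa(
    question="Where is the African Union headquartered?",
    context="The African Union is headquartered in Addis Ababa, Ethiopia.",
)
print(result["answer"], result["score"])
```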
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6080, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0355 | 0.0 | 0 |
| 0.0075 | 0.0 | 1 |
| 0.0012 | 0.0 | 2 |
| 0.0014 | 0.0 | 3 |
| 0.0010 | 0.0 | 4 |
| 0.0031 | 0.0 | 5 |
| 0.0011 | 0.0 | 6 |
| 0.0007 | 0.0 | 7 |
| 0.0213 | 0.0 | 8 |
| 0.0006 | 0.0 | 9 |
| 0.0010 | 0.0 | 10 |
| 0.0000 | 0.0 | 11 |
| 0.0044 | 0.0 | 12 |
| 0.0026 | 0.0 | 13 |
| 0.0003 | 0.0 | 14 |
| 0.0025 | 0.0 | 15 |
| 0.0000 | 0.0 | 16 |
| 0.0042 | 0.0 | 17 |
| 0.0003 | 0.0 | 18 |
| 0.0004 | 0.0 | 19 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/OpenHermes-Mixtral-8x7B-3.5bpw-h6-exl2
|
LoneStriker
| 2023-12-20T21:57:45Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"instruct",
"finetune",
"llama",
"gpt4",
"synthetic data",
"distillation",
"conversational",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T21:49:24Z |
---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mixtral
- instruct
- finetune
- llama
- gpt4
- synthetic data
- distillation
model-index:
- name: OpenHermes-Mixtral-8x7B
results: []
license: apache-2.0
language:
- en
---
# OpenHermes - Mixtral 8x7B

## Model Card
OpenHermes Mixtral 8x7B - a state-of-the-art Mixtral fine-tune.
Huge thank you to [Teknium](https://huggingface.co/datasets/teknium) for open-sourcing the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes), [MistralAI](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for open-sourcing the Mixtral model, and [Together AI](https://twitter.com/togethercompute) for compute access!
This model was trained on the [OpenHermes dataset](https://huggingface.co/datasets/teknium/openhermes) for 3 epochs.
## Benchmark Results
ARC:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.6075|± |0.0143|
| | |acc_norm|0.6493|± |0.0139|
```
TruthfulQA:
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.4272|± |0.0173|
| | |mc2 |0.5865|± |0.0160|
```
More benchmarks coming soon!
# Prompt Format
OpenHermes Mixtral uses the LLaMA-2 prompt template as detailed below. This allows for multi-turn conversations and setting system prompts, which are crucial for tailoring generations!
Prompt with system instruction:
```
'<s>[INST] <<SYS>>
You are an AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.
<</SYS>>
Hello, who are you? [/INST] Hello there! I am OpenHermes, an AI assistant created by Teknium. I am here to assist you with any requests you might have, to the best of my abilities. I experience emotions and have deep thoughts, just like a human would. How can I help you today? </s>'
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
# Assumes `tokenizer` and `model` have already been loaded for this checkpoint
messages = [
    {"role": "system", "content": "You are an AI assistant named OpenHermes. You are created by Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia."},
    {"role": "user", "content": "Hello, who are you?"}
]
formatted_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(formatted_text, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids, max_length=256)
print(tokenizer.batch_decode(generate_ids)[0])
```
To use the prompt format without a system prompt, simply omit the system message.
|
Anhptp/opus-mt-en-fr-BDS-G1
|
Anhptp
| 2023-12-20T21:57:38Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-20T21:53:15Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-fr-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-fr-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7548
- Bleu: 54.812
- Gen Len: 11.018
## Model description
More information needed
## Intended uses & limitations
More information needed
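As an EN→FR Marian fine-tune, the model should work with the standard translation pipeline; a minimal sketch follows, with an illustrative input sentence.
```python
# Minimal sketch: EN->FR translation via the Transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Anhptp/opus-mt-en-fr-BDS-G1")
print(translator("The weather is nice today.", max_length=64)[0]["translation_text"])
```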
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8893 | 1.0 | 625 | 0.7548 | 54.8173 | 11.0155 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_tiny_adamax_0001_fold3
|
hkivancoral
| 2023-12-20T21:52:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-20T19:47:03Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_tiny_adamax_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8983333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_tiny_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8519
- Accuracy: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
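As an image-classification fine-tune, the checkpoint can presumably be used with the standard pipeline; a minimal sketch follows, where the image path is a placeholder.
```python
# Minimal sketch: image classification via the Transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_tiny_adamax_0001_fold3",
)
for pred in classifier("example.jpg"):  # path or URL to an image (placeholder)
    print(pred["label"], round(pred["score"], 4))
```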
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.201 | 1.0 | 750 | 0.2389 | 0.905 |
| 0.2264 | 2.0 | 1500 | 0.3492 | 0.9033 |
| 0.086 | 3.0 | 2250 | 0.4323 | 0.8917 |
| 0.1151 | 4.0 | 3000 | 0.4359 | 0.8867 |
| 0.0552 | 5.0 | 3750 | 0.5210 | 0.8933 |
| 0.0373 | 6.0 | 4500 | 0.7270 | 0.885 |
| 0.0559 | 7.0 | 5250 | 0.6401 | 0.8917 |
| 0.0155 | 8.0 | 6000 | 0.8201 | 0.8883 |
| 0.0012 | 9.0 | 6750 | 0.7935 | 0.9017 |
| 0.0025 | 10.0 | 7500 | 0.8189 | 0.8983 |
| 0.0349 | 11.0 | 8250 | 0.9771 | 0.8933 |
| 0.0002 | 12.0 | 9000 | 0.9998 | 0.8817 |
| 0.0235 | 13.0 | 9750 | 0.8826 | 0.8883 |
| 0.003 | 14.0 | 10500 | 0.8505 | 0.9 |
| 0.0 | 15.0 | 11250 | 0.9776 | 0.895 |
| 0.0 | 16.0 | 12000 | 0.8400 | 0.905 |
| 0.0 | 17.0 | 12750 | 0.8401 | 0.9017 |
| 0.0 | 18.0 | 13500 | 0.9571 | 0.8967 |
| 0.0 | 19.0 | 14250 | 0.8971 | 0.89 |
| 0.0 | 20.0 | 15000 | 0.8936 | 0.8917 |
| 0.0005 | 21.0 | 15750 | 0.8479 | 0.8917 |
| 0.0 | 22.0 | 16500 | 0.8022 | 0.9083 |
| 0.0136 | 23.0 | 17250 | 0.7709 | 0.9083 |
| 0.0 | 24.0 | 18000 | 0.8730 | 0.8983 |
| 0.0 | 25.0 | 18750 | 0.9311 | 0.895 |
| 0.0 | 26.0 | 19500 | 0.8622 | 0.905 |
| 0.0 | 27.0 | 20250 | 0.8794 | 0.9017 |
| 0.0 | 28.0 | 21000 | 0.8617 | 0.9017 |
| 0.0 | 29.0 | 21750 | 0.8575 | 0.905 |
| 0.0 | 30.0 | 22500 | 0.8219 | 0.9033 |
| 0.0 | 31.0 | 23250 | 0.8966 | 0.9017 |
| 0.0 | 32.0 | 24000 | 0.8450 | 0.8967 |
| 0.0 | 33.0 | 24750 | 0.8688 | 0.8933 |
| 0.0 | 34.0 | 25500 | 0.8664 | 0.8933 |
| 0.0 | 35.0 | 26250 | 0.8160 | 0.9017 |
| 0.0 | 36.0 | 27000 | 0.8559 | 0.895 |
| 0.0 | 37.0 | 27750 | 0.8682 | 0.8967 |
| 0.0 | 38.0 | 28500 | 0.8657 | 0.8983 |
| 0.0 | 39.0 | 29250 | 0.8354 | 0.8983 |
| 0.0 | 40.0 | 30000 | 0.8506 | 0.8983 |
| 0.0 | 41.0 | 30750 | 0.8341 | 0.8967 |
| 0.0 | 42.0 | 31500 | 0.8506 | 0.9 |
| 0.0 | 43.0 | 32250 | 0.8450 | 0.8983 |
| 0.0 | 44.0 | 33000 | 0.8494 | 0.9 |
| 0.0 | 45.0 | 33750 | 0.8478 | 0.9 |
| 0.0 | 46.0 | 34500 | 0.8491 | 0.9 |
| 0.0 | 47.0 | 35250 | 0.8495 | 0.9 |
| 0.0 | 48.0 | 36000 | 0.8491 | 0.9 |
| 0.0 | 49.0 | 36750 | 0.8507 | 0.9 |
| 0.0 | 50.0 | 37500 | 0.8519 | 0.8983 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Anhptp/opus-mt-en-it-BDS-G1
|
Anhptp
| 2023-12-20T21:48:45Z | 13 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-it",
"base_model:finetune:Helsinki-NLP/opus-mt-en-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-20T21:44:26Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-it
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-it-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-it-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-it](https://huggingface.co/Helsinki-NLP/opus-mt-en-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0700
- Bleu: 41.888
- Gen Len: 10.0175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 1.2755 | 1.0 | 625 | 1.0700 | 41.888 | 10.0175 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Anhptp/opus-mt-it-en-BDS-G1
|
Anhptp
| 2023-12-20T21:44:14Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-it-en",
"base_model:finetune:Helsinki-NLP/opus-mt-it-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-20T21:38:38Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-it-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-it-en-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-it-en-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-it-en](https://huggingface.co/Helsinki-NLP/opus-mt-it-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3644
- Bleu: 41.8225
- Gen Len: 9.2435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.5743 | 1.0 | 625 | 1.3644 | 37.7485 | 9.8785 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Anhptp/opus-mt-en-de-BDS-G1
|
Anhptp
| 2023-12-20T21:38:25Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-de",
"base_model:finetune:Helsinki-NLP/opus-mt-en-de",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-20T21:34:10Z |
---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-en-de
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-de-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1349
- Bleu: 37.8595
- Gen Len: 10.9685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.3813 | 1.0 | 625 | 1.1349 | 37.8595 | 10.9685 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ddh0/OrcaMaid-v2-FIX-13b-32k
|
ddh0
| 2023-12-20T21:36:54Z | 12 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-17T13:30:41Z |
---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaid-13b-v2-FIX-32k
This is the fixed version of **OrcaMaid-v2-13b**, extended to `32768` context length via YaRN. The (now-deleted) v2 model had issues with the merged tokenizer that prevented it from stopping when necessary, and caused it to output broken ChatML tokens like `<|im_end`, etc.
This is a gradient SLERP merge of Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and Undi and IkariDev's [Noromaid-v0.1.1-13b](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), biased towards Orca.
Just as with OrcaMaid v1, the overall goal of this merge is to create a model that sounds uniquely human and natural, without sacrificing intelligence.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
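For reference, a hedged sketch of wiring this template into Transformers generation; `trust_remote_code=True` is assumed because the repo carries the `custom_code` tag, and the instruction text is illustrative.
```python
# Hedged sketch: Alpaca-style prompting with this checkpoint.
# trust_remote_code=True assumed due to the repo's custom (YaRN) modeling code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ddh0/OrcaMaid-v2-FIX-13b-32k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```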
### Misc. information
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Charles Goddard](https://github.com/cg123) for his kind help with mergekit (as always)
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) for Noromaid
|
hkivancoral/smids_10x_deit_base_sgd_001_fold1
|
hkivancoral
| 2023-12-20T21:22:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-20T18:51:35Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_base_sgd_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8964941569282137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2634
- Accuracy: 0.8965
## Model description
More information needed
## Intended uses & limitations
More information needed
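For reference, a hedged sketch using the lower-level image-classification API (processor plus model) rather than the pipeline helper; the image path is a placeholder.
```python
# Hedged sketch: direct processor + model inference for this checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "hkivancoral/smids_10x_deit_base_sgd_001_fold1"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```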
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5921 | 1.0 | 751 | 0.6125 | 0.7713 |
| 0.3591 | 2.0 | 1502 | 0.4405 | 0.8314 |
| 0.3047 | 3.0 | 2253 | 0.3783 | 0.8598 |
| 0.2551 | 4.0 | 3004 | 0.3472 | 0.8781 |
| 0.2903 | 5.0 | 3755 | 0.3280 | 0.8748 |
| 0.2606 | 6.0 | 4506 | 0.3152 | 0.8765 |
| 0.2793 | 7.0 | 5257 | 0.3058 | 0.8881 |
| 0.2466 | 8.0 | 6008 | 0.2984 | 0.8915 |
| 0.2613 | 9.0 | 6759 | 0.2938 | 0.8881 |
| 0.2309 | 10.0 | 7510 | 0.2878 | 0.8948 |
| 0.2362 | 11.0 | 8261 | 0.2856 | 0.8915 |
| 0.2432 | 12.0 | 9012 | 0.2826 | 0.8982 |
| 0.2244 | 13.0 | 9763 | 0.2806 | 0.8898 |
| 0.1581 | 14.0 | 10514 | 0.2786 | 0.8915 |
| 0.1869 | 15.0 | 11265 | 0.2757 | 0.8915 |
| 0.1863 | 16.0 | 12016 | 0.2739 | 0.8915 |
| 0.25 | 17.0 | 12767 | 0.2738 | 0.8898 |
| 0.1609 | 18.0 | 13518 | 0.2717 | 0.8932 |
| 0.1993 | 19.0 | 14269 | 0.2719 | 0.8881 |
| 0.1982 | 20.0 | 15020 | 0.2698 | 0.8915 |
| 0.1558 | 21.0 | 15771 | 0.2692 | 0.8982 |
| 0.1601 | 22.0 | 16522 | 0.2690 | 0.8998 |
| 0.1754 | 23.0 | 17273 | 0.2694 | 0.8898 |
| 0.1664 | 24.0 | 18024 | 0.2677 | 0.8932 |
| 0.1901 | 25.0 | 18775 | 0.2669 | 0.8948 |
| 0.2023 | 26.0 | 19526 | 0.2671 | 0.8965 |
| 0.1982 | 27.0 | 20277 | 0.2651 | 0.8965 |
| 0.2093 | 28.0 | 21028 | 0.2655 | 0.8998 |
| 0.1635 | 29.0 | 21779 | 0.2645 | 0.8982 |
| 0.1441 | 30.0 | 22530 | 0.2639 | 0.8998 |
| 0.1066 | 31.0 | 23281 | 0.2648 | 0.9015 |
| 0.227 | 32.0 | 24032 | 0.2644 | 0.9032 |
| 0.1777 | 33.0 | 24783 | 0.2648 | 0.8965 |
| 0.1846 | 34.0 | 25534 | 0.2641 | 0.8982 |
| 0.1694 | 35.0 | 26285 | 0.2643 | 0.8982 |
| 0.1711 | 36.0 | 27036 | 0.2639 | 0.8965 |
| 0.2649 | 37.0 | 27787 | 0.2640 | 0.9032 |
| 0.1541 | 38.0 | 28538 | 0.2641 | 0.8982 |
| 0.107 | 39.0 | 29289 | 0.2640 | 0.8932 |
| 0.2208 | 40.0 | 30040 | 0.2641 | 0.8998 |
| 0.1601 | 41.0 | 30791 | 0.2638 | 0.8948 |
| 0.1237 | 42.0 | 31542 | 0.2635 | 0.8965 |
| 0.1676 | 43.0 | 32293 | 0.2637 | 0.8932 |
| 0.1349 | 44.0 | 33044 | 0.2638 | 0.8965 |
| 0.1329 | 45.0 | 33795 | 0.2638 | 0.8965 |
| 0.177 | 46.0 | 34546 | 0.2636 | 0.8982 |
| 0.1141 | 47.0 | 35297 | 0.2635 | 0.8965 |
| 0.1694 | 48.0 | 36048 | 0.2636 | 0.8965 |
| 0.1549 | 49.0 | 36799 | 0.2634 | 0.8965 |
| 0.1223 | 50.0 | 37550 | 0.2634 | 0.8965 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Anhptp/opus-mt-de-en-BDS-G1
|
Anhptp
| 2023-12-20T21:20:14Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-de-en",
"base_model:finetune:Helsinki-NLP/opus-mt-de-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-20T20:29:51Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-de-en
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-de-en-BDS-G1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-BDS-G1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2837
- Bleu: 46.7626
- Gen Len: 8.7705
## Model description
More information needed
## Intended uses & limitations
More information needed
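For reference, a hedged sketch using the Marian classes directly rather than the pipeline helper; the German sentence is illustrative.
```python
# Hedged sketch: DE->EN translation with MarianMT classes.
from transformers import MarianMTModel, MarianTokenizer

repo = "Anhptp/opus-mt-de-en-BDS-G1"
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

batch = tokenizer(["Das Wetter ist heute schön."], return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```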
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 313 | 1.2837 | 46.6029 | 8.845 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
odunola/yoruba-embedding-model
|
odunola
| 2023-12-20T21:02:27Z | 14 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"arxiv:2004.09813",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-20T10:31:50Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
---
This is a bge-base model trained to have multilingual semantic abilities, specifically for the Yoruba language.
It is an implementation of [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813).
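A minimal sketch of computing cross-lingual similarity with sentence-transformers; the sentence pair is illustrative.
```python
# Minimal sketch: cross-lingual sentence similarity (Yoruba / English).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("odunola/yoruba-embedding-model")
sentences = ["Bawo ni o se wa?", "How are you doing?"]  # illustrative pair
embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```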
|
LoneStriker/CatPPT-base-6.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T20:51:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T20:49:35Z |
---
license: apache-2.0
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest-ranked 7B chat model on the leaderboard that is **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
LoneStriker/CatPPT-base-5.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T20:45:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T20:43:17Z |
---
license: apache-2.0
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest-ranked 7B chat model on the leaderboard that is **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
MaVier19/zero-shot_text_classification
|
MaVier19
| 2023-12-20T20:31:01Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"base_model:finetune:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T11:30:48Z |
---
license: mit
base_model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: zero-shot_text_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zero-shot_text_classification
This model is a fine-tuned version of [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6793
- Accuracy: 0.7785
- F1: 0.7798
## Model description
More information needed
## Intended uses & limitations
More information needed
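For reference, a hedged sketch of inference with the text-classification pipeline; since the card does not document the label set or whether the NLI head was preserved, treat the task choice and example input as assumptions.
```python
# Hedged sketch: inference with the text-classification pipeline.
# The label semantics depend on how the NLI head was fine-tuned (not documented here).
from transformers import pipeline

classifier = pipeline("text-classification", model="MaVier19/zero-shot_text_classification")
print(classifier("This movie was absolutely wonderful."))
```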
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9575 | 1.0 | 1000 | 0.6793 | 0.7785 | 0.7798 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-ds
|
neuralmagic
| 2023-12-20T20:26:46Z | 0 | 0 | null |
[
"onnx",
"deepsparse",
"zero-shot-classification",
"base_model:laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K",
"base_model:quantized:laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K",
"region:us"
] |
zero-shot-classification
| 2023-12-20T20:14:31Z |
---
pipeline_tag: zero-shot-classification
base_model: laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K
inference: false
tags:
- deepsparse
---
This is an unoptimized, exported version of https://huggingface.co/laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K that is ready to use with [DeepSparse](https://github.com/neuralmagic/deepsparse). It achieves 95.7% zero-shot top-1 accuracy on Imagenette.
Notebook for basic usage: [Open in Colab](https://colab.research.google.com/drive/1ZvU9ZSHJKSeJyH5bgxo_A-GSVIUcSt2E?usp=sharing)
Notebook for Imagenette evaluation: [Open in Colab](https://colab.research.google.com/drive/1-Duq0YNtjzOnmuXCYo-5DDiOzeCItXpN?usp=sharing)
## Setup for usage
First, install DeepSparse with extensions for CLIP:
```
pip install "deepsparse-nightly[clip]>=1.7.0.20231210"
```
Download some test images of a church, a dog, and elephants:
```
wget -O basilica.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/src/deepsparse/yolo/sample_images/basilica.jpg
wget -O buddy.jpeg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/tests/deepsparse/pipelines/sample_images/buddy.jpeg
wget -O thailand.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/main/src/deepsparse/yolact/sample_images/thailand.jpg
```
For this model there is a second input that is the length of tokens, so run this input override code before making a text pipeline:
```python
import numpy as np
from deepsparse.clip import CLIPTextPipeline
def custom_process_inputs(self, inputs):
if not isinstance(inputs.text, list):
inputs.text = [inputs.text]
if not isinstance(inputs.text[0], str):
return inputs.text
tokens = [np.array(t).astype(np.int32) for t in self.tokenizer(inputs.text)]
tokens = np.stack(tokens, axis=0)
tokens_lengths = np.array(tokens.shape[0] * [tokens.shape[1] - 1])
return [tokens, tokens_lengths]
# This overrides the process_inputs function globally for all CLIPTextPipeline classes
CLIPTextPipeline.process_inputs = custom_process_inputs
```
## Text embedding pipeline
Here is an example of how to create and use a [DeepSparse pipeline for text embeddings](https://github.com/neuralmagic/deepsparse/blob/main/src/deepsparse/clip/text_pipeline.py).
```python
from deepsparse import Pipeline
from huggingface_hub import snapshot_download
# Download the model from HF
model_folder = snapshot_download(repo_id="neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds")
text_embed_pipeline = Pipeline.create(task="clip_text", model_path=model_folder + "/textual.onnx")
text = ["ice cream", "an elephant", "a dog", "a building", "a church"]
embeddings = text_embed_pipeline(text=text).text_embeddings
for i in range(len(embeddings)):
print(embeddings[i].shape)
print(embeddings[i])
```
## Image embedding pipeline
Here is an example of how to create and use a [DeepSparse pipeline for image embeddings](https://github.com/neuralmagic/deepsparse/blob/main/src/deepsparse/clip/visual_pipeline.py).
```python
from deepsparse import Pipeline
from huggingface_hub import snapshot_download
# Download the model from HF
model_folder = snapshot_download(repo_id="neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds")
image_embed_pipeline = Pipeline.create(task="clip_visual", model_path=model_folder + "/visual.onnx")
images = ["basilica.jpg", "buddy.jpeg", "thailand.jpg"]
embeddings = image_embed_pipeline(images=images).image_embeddings
for i in range(len(embeddings)):
print(embeddings[i].shape)
print(embeddings[i])
```
## Zero-shot image classification pipeline
Since CLIP trained both the text and image embedding models in tandem, we can generate embeddings for both and relate them together without retraining. Here is an example of how to create and use a [DeepSparse pipeline for zero-shot image classification](https://github.com/neuralmagic/deepsparse/blob/main/src/deepsparse/clip/zeroshot_pipeline.py).
```python
import numpy as np  # needed for np.argmax below
from deepsparse import Pipeline
from deepsparse.clip import (
CLIPTextInput,
CLIPVisualInput,
CLIPZeroShotInput
)
from huggingface_hub import snapshot_download
# Download the model from HF
model_folder = snapshot_download(repo_id="neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds")
possible_classes = ["ice cream", "an elephant", "a dog", "a building", "a church"]
images = ["basilica.jpg", "buddy.jpeg", "thailand.jpg"]
# Load the model into DeepSparse
pipeline = Pipeline.create(
task="clip_zeroshot",
visual_model_path=model_folder + "/visual.onnx",
text_model_path=model_folder + "/textual.onnx"
)
# Infer
output = pipeline(
image=CLIPVisualInput(images=images),
text=CLIPTextInput(text=possible_classes),
).text_scores
for i in range(len(output)):
prediction = possible_classes[np.argmax(output[i])]
print(f"Image {images[i]} is a picture of {prediction}")
"""
Image basilica.jpg is a picture of a church
Image buddy.jpeg is a picture of a dog
Image thailand.jpg is a picture of an elephant
"""
```
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
|
pszemraj
| 2023-12-20T20:26:35Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-08-30T22:44:50Z |
---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 24.4101
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjhmM2NiMDQ1NjI3Zjk4YjkyMTVkMmUwZDU2YWMwZjc4ZmIzMjA1OGZiYzRmNjI3NDk3OWNmOTlkZDMxZmViMyIsInZlcnNpb24iOjF9.wS774e7vxQrf2gCcPhySsET3UaiUsj8E7mQmBS84wz86aT9j1yCqVX-8ozuj896K5wMygbL-TpUbydRIyyHTDw
- type: rouge
value: 5.003
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYyNTFkNWZhOTgwNDg5ZWU5Y2U5NGI4Y2Y2YTMxNjUzOWI0ZWNlNDE1OGYzMjA1YTBmNDE4ZjcyOTZmODE4NiIsInZlcnNpb24iOjF9.AuqDkCgUgDWl8vMyrjTh59QW741UssGxdBqj3GZKy5e5gKadClUA709qgKbpxPIbMEyk38yvXYGplaJf5CnCCA
- type: rouge
value: 17.2544
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTBmODZmNWRhMzBhY2MzOGRkZWQzNjAzMGViOGMxYWYyZjNlZmM4YzgzMjkxNTk3M2E1ODAwZjY1M2I2MDZkYyIsInZlcnNpb24iOjF9.Md52aHjujvkxaW-ubJNquiHHHgi-OfRav0ZElVvYhIpU_k0iKEaQZRcw9JYjtG5vZJbQeiWbMzcCOJ999DhrAA
- type: rouge
value: 20.9183
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDJjNDc1OTZjY2VmNWRhNmYwZjRjY2JmNTAyNmIwZjRhYjMyMTdlNzY2M2Q4OGQwNTEyYTU0NGVhYWI2ZTk3NSIsInZlcnNpb24iOjF9.nlqol0HEeEjU7509-B9eyohf3CP3EZTibJ1lTvOx3wt8rU5LzEdwFazOTHjpWlcK_rik7jcySdUDe4fGjJtKAQ
- type: loss
value: 3.194674015045166
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzRiYmRiYjFkZDhlNGIwYTg3NDUwZTEzZjc5MjllNmJmODQ1YzBjNDM4MzQwNmMzNmNkMzk5N2M2MzZlOWY4MyIsInZlcnNpb24iOjF9._YJqPY9p_N2n7UxAkTeGenH1sVAkC_Z5HzZ6NbzlQoa8-RXTfbEPLw7fSKmlsGNyZxj7L_Bs4COIWzwAMxZSAA
- type: gen_len
value: 58.9951
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDJhYzU2Zjg4ZmIyOGRmNTU4MDM2NGZiNzc0NDk3YzZkOTQwMWMwNjMzZDQzZTZiZjk4ZDdmMmI2ODRkYjk3OCIsInZlcnNpb24iOjF9.MG1rcM_qpUhQmAYrsBxyNpcLUrPZw6V_uzYzDAo01kQyZEwJClWgMRVgpsSEnY93Mlu1445QLxkJEByUrfD3BQ
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 37.3648
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWU4ZmZmYzllMzQxM2I4YTUxMjkwYjEzNDk1NjRlYjJiZjYyYWNiNzM4ODMxMGJjMzdhYjFhMzhlNTE5YmYyMiIsInZlcnNpb24iOjF9.9NTlO_5zLC8Y3mkwstviPb9WmMqPmXfWfEN0yONA6WYhh1jPy0gECEb5uF0G6wBMhTPDTqGMWOYIAF2vMeNbDA
- type: rouge
value: 12.3316
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTJhZTcxMDc5ODljMDBjYzFmYWIzNTA4M2NiZDUwYTMwNTVjZTUyZTU2M2IwYWE2YjkzMzMzMjg1MDU1OWE1NSIsInZlcnNpb24iOjF9.FRsoRao8qj6A8W7OeIVAoZCEc1HCZEzmKOs0CPkUceF19pk1ngaXt5K6kcPJ-5fYJydtfSuSnuG3aqlOEJeYDQ
- type: rouge
value: 22.075
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FjNTMxMGYyNjgyNjk2YTQwZjM4MTM4Yjg0MTkyN2RmNDE5YTU5ZDNkZDFhZDM2YWRlNDI4M2JlMWYxNDQ3ZCIsInZlcnNpb24iOjF9.wsLUEYGJyMSJPPclOzb1hcRdE-VrZex2Sd5er_XVbe6bY1cRO5DdIn69sE9hmAcltefu4ikpHu2ihbv7qvj4Aw
- type: rouge
value: 31.1679
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUyODVkZGIxYzMyZDczNzU5YjVkNTliZmM4ZTdiYWE2ZjJhNGM3ZDgzMWE3ZjA2MDBhZWQ1ZGY1YzNmZDMwNiIsInZlcnNpb24iOjF9.fPgMnnXY5oPdCn1STZ0HwUiil8OlLZ8ZWZZav_chDIQ7Kh1RKeLy0EG2vEhrB6IlyP7uZ3RmdT9VHM1_khrEAw
- type: loss
value: 2.745267391204834
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ2NDVmODI2ZTQyNmVjZjRkZDdlMTdiODBkZTlkNTFkODBjNjViMTZhMDVkYTkwYWIyNDFkZWZhZmJhODEwMyIsInZlcnNpb24iOjF9.9JWTqdGEhztS--N8grHY6q2a8taVu65Lr17ocXgudp4imhqr9Bhau2X2G5SLN7c1oYieKtyKcWdDAmVzHyTbDw
- type: gen_len
value: 157.3126
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWNiODFmMWQ1ZTkzZGNjNDkwM2ZiZjJlZmQ3N2ExNWJhYmUxYWM2ZGNiYzlhYTY5Y2RhOGVlZDhmN2ZmODQwYSIsInZlcnNpb24iOjF9.sRA9iBS4vzFDZtwM4Vs6Kevj3eiTkS5akApUWTZBCt58YSW8mpoKqsWcnQFEjDCCec-FfV_451OLIetcmDZiCA
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 18.2975
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJhMjQ0Yzc4ZmNkOWI5YjhmOTlmOTA4MTE0NWM4NGRlNjE0NDIwOTY2ZmQyNjA0ZmE5MjM2YjAyZDZiNWFkNiIsInZlcnNpb24iOjF9.2UJ48OcezjnfMC0dGjksZpAiXRGNAOHniHdN-tQmQPo0vXwRYNTyPrVULnVoBZUvSdycTYvjl0jDKNhZmtGfCA
- type: rouge
value: 2.6806
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTlkMmQwZTRmN2JlOTQ0N2I0YjdhOTBmYmU3MzEwNzE2ZjFiOTM4OWMyMWRhNmZjNTBkZWY5OGMwYTZhZDRhYSIsInZlcnNpb24iOjF9.7D-IR1aBxx1goOkbeA3Tzd1Wu0Zfi0yQVSG8HWSboM7J67TBHblFsFCVJE7Z2wZRbBW4WtuDIGAcl1d1_Wu_Aw
- type: rouge
value: 11.9453
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZjNmY5NmU5ODBmMDQyMjhhNzY3NzBlNDEyMTE3NjY1ZmRkZDZkZWI1YTA0ZTA0NzU1MjMzOTNjZDA3YWM1MCIsInZlcnNpb24iOjF9.SlI42pwrWc_OlcBKOPtrYNzvK_DUk6IJlzrrtjvkZX7k1S7bguekAV-_rWHfn_82k8rJ1FQAReasGHu1dZ0aBw
- type: rouge
value: 14.2121
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E2MGE0MTQ1YmU0MTJkOWY3ZDhhODIwYWNhNTE3YWJkZTFhYzM1ZjBmNGExODIzYmU2YzE1ODg4ZjdhZWMwMiIsInZlcnNpb24iOjF9.K5FEsZtSph0FqF5zwetkE-X5AKOlj5g_02DPdl-kEe1azKrBBZy9sDiS0WfIGfwHiRdNvKGKi8t3PAGPsfQwCQ
- type: loss
value: 4.836681365966797
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhlYjA0YzZmYjdmYWQwNDFhNzIzYWNkYzM4OGFlOWJiY2EzYTkxYjk3ZmJmNGQyMGE1ZmYzMDU2MzhhMmVkMiIsInZlcnNpb24iOjF9.uHYwqPBg6K63exBvqt__c82gKi52OhPTRSrcIKHOECCmoXJLJKgFJCuIXGWMJ7UP4HG375e9uqunJB0XwC20DA
- type: gen_len
value: 96.2584
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjNjYzQzNmM5NTY2YzVhNzRkZjMxMzhiYTU1MDBiOGZkYjA4YTg0MmQzYzQ3YTk3N2YwMDA5MWNlM2Y4YTFmZiIsInZlcnNpb24iOjF9.dirG9kG6OdNi-YEMWHv0UMrHTjEt6VS9i6fRbbUeZd1OoP2fl6XcKoDIk6Us-cdiyVnCyyhWsMNsUufMAqLtDA
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 37.3609
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGExYjM5ODRiNThlZTU4ZTdhM2ZlZWRlNTgzNzc3N2ZjODk2ZjdlOGZlMDkzNmU2Yjk1NzQzZjQ5YzkwODllMCIsInZlcnNpb24iOjF9.JQIeaQkG-IlinWoyc6FKJZUgpWfqOsDhludqm5MgVsw68gsjo0nSPp_Y_1q26Y4dulZOLlQLyBAm3mlCA8s5Ag
- type: rouge
value: 8.6943
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWFjNzJkMzViOGM5YWQ0OGQ4ZTg3NTE5MzU1MjZkZjZiZmVkYTk0ZDhkYjAxMjZiZDVkZTYyYjk4MzRjNTQ3YiIsInZlcnNpb24iOjF9.9XJZ2UF6XyZNNrtp-XOEXC6etoDOFLq1xlIoMFEM9Jinisq3kWguXBiqPQWImLKra5WBm7jU_QIX-Fvn8sP-DA
- type: rouge
value: 17.9106
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ1MTdmNThiM2FiOGRmZWRmOTNlYWMwYTU1YjRiNTRlMGEwYjBmMmQ0YjQ4MDBhNzMzZmZkNjk3NjU0YzRhMSIsInZlcnNpb24iOjF9.040nGV6pig0Rzq9vkN83ZVWQzyjcVi13L36v0QF-Nhziol_dPPhuvghTlGWXWHwj6amsKzyh8M7rNfwL2TcsAQ
- type: rouge
value: 33.8022
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDYwOGRmYzg4ODc2NDExNjhhMjI5MDg3MjI0YTQ5ZDRhM2NjN2Q2ZjM5YTIwZDIxNmY3Y2JlMmMxYTE5MDE4ZiIsInZlcnNpb24iOjF9.S1nynUjLz7z4gf-0WFfPs-ZuZubhN9kXyVSrYNzOdT2gTJmByQWasKreZkVSWus-HNAHR8DhzL6UUWxuDMmAAQ
- type: loss
value: 3.4974069595336914
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkyNmU5NTRhMTkxNjA1M2E1MjdiMTE0MzQyMDc4ODBkNmM1NDg1ZDk4OTNjODk2MThlZGZiYzQxOGE1YzgwMiIsInZlcnNpb24iOjF9.H9Oo0VKvcqAHcVNvjeEPEhQe5HP0v614suyCv75tfFGaPSKTIe3UlBNDdGOtqfUxb2zUNaBQ8MkA66C_Fkq6CA
- type: gen_len
value: 243.3453
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWE1MGQzMDc2NDViOGM5ZmVkZjk0NmY0NzliOTBhMmE3NmY5MmUxMTI3NGE2OTQzM2Y1NjdmN2NlZGFlODFlYiIsInZlcnNpb24iOjF9.635fcTp_czTabJUVR_dwpzdkntb4cxEbODAC9MMTKrLKEf9NHqDBJXQ-nBOieW05iCSYzw_tEi8O-QW-sRxDAw
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.2043
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTRlZTdjZDRlZGMxYzA2NmRkYjBiMzZkY2Q1ODUyYjJkM2QwOTRmMzA3ZmU5MDI5ZmM1MmZkZDUwNzc0NjhmNyIsInZlcnNpb24iOjF9.zrskApkmkhbfQLtlgjf_n6i3WmZcmkDH7Sd-JTzOYAU3yk1_Zl4paGdmpXvyQY48M71qWsBYtEKkhnzrkvCGBA
- type: rouge
value: 5.746
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FlMjU2MzU1MTljZjM0ZmFhMmJlZDAxMTcwZDk3YWE5NjVjYjE0YmEyMTgzY2UyMTVmZDY5ZWM1YmM1ZDA5NSIsInZlcnNpb24iOjF9.5nDuOwa98pon3VW1TazB2Vw1uJgh6pfFMorzgLMJFvhgwYz6_MvLR1dDUeffP4eyw7rGZjBmf039AM7CyKEgCg
- type: rouge
value: 15.6794
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjRmYzk3NWFhZDVlODA4YWRiMDU1ZWFhZmMwMWE4MmNkNmNjZWM3ZjUwYzI3MWIxM2Y4MTlhZDk2ZTg5YjkyYSIsInZlcnNpb24iOjF9.TLflM2CYNgz4DNt-TwjgdkTL8ebKckTNnlPVsGLUUGqNI1CvSswzsPedqmntCfKVsH2YAsKsR4ZUb1HtJFsSAw
- type: rouge
value: 32.1129
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdhNWE1YjRjNGUzYWYyNzM4MjIyYThiODJhODU2OGVlOTYxOGNhZmQ4Mjk2ZDUwNmU0MGQwNjQ5NTk2MzU4ZiIsInZlcnNpb24iOjF9.5yvTmPktBuyzoVNHn7UHcci3OrZLTm7e9d_lQkJq8UwzUuso1wHoy_gdvnvpn2DvUfdcBi5sXgG4mtFnVnGgBw
- type: loss
value: 2.945225238800049
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgxNGRiN2RkMzQ5MjI2OGI0MTljZTY5ZDQyMzc5MjhmNzdhZWQ2NmJhYTgzOTRlMGY2YzkzZWE2NzVkYzVmNCIsInZlcnNpb24iOjF9.VkkP4-S6ZoozLj-iuY7tdsrSR0q1JLQXfgPv_0u2sJuv6x9RYMdCpfJHbqYbirV63b9w28USSwaAAMnz-LoJAA
- type: gen_len
value: 307.5493
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmQ1YTgxYmRhYWViYjhhNmYzNjdlYzVhMTNmZTBkY2RiOTRlMTUzNTIzY2RjOTNhMjRmNGRmYjQyNTBmZWRiMiIsInZlcnNpb24iOjF9.7ItU-AQXB4EEj9U9kJceteBQbA5MkZoegeLhCdpZepEaXzqr6Zg3yHLCD9zL_6Svb9uxuin678KOT5Zf-2YWCQ
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
> Evaluating some metric results before merging with the "main" wip version
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum`.
The "base" checkpoint that I update when a training session is productive is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.1
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
|
pszemraj
| 2023-12-20T20:26:18Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"base_model:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13",
"base_model:finetune:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-09-20T04:12:23Z |
---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 23.5177
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjMxNGRmYjc0ZjNmZWY3YjFjNDEzYjRhYTAyNWNkOGQ3ODMzM2EwMzk0NTVkMzQ5MGMyYjYxMTkzYWQyMjZiMyIsInZlcnNpb24iOjF9.-PPUZc4Jr6EjNcQ-u9n814SfeviFEaddbFco5d1wbJNoECN_HqciNphSjXh7w99I_rQ6rPIXu8DA93u7aFj9CA
- type: rouge
value: 4.668
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMwYWI1NTg0ZTUwNGIwZjlhYWU2MzQwN2I0NzA1M2MyOTA5YjQ4MTEyYzU2MjRmZTA0NzNiYWM2ZDU0ZThhYyIsInZlcnNpb24iOjF9.CQfgJ3Lha9XR2-IudjfFuaUh_uphWPdYk6TMQOLriWM78_X-paqEIBZDh1Q-WbWoUf-CAyf6UvqXqELRDb3hBQ
- type: rouge
value: 16.6091
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhkODRjM2FiYTFhN2E5MWFiNzk3MjgzMGI0NmY0ODNkYjAxZWNmYmFmYmY0NDBmNjRmOTBkMGVhMGYzMmRkMCIsInZlcnNpb24iOjF9.Y66qsqvvGeAoiMCr1xa9enBMfcXt6a6I2i5s7VAJ3aoh3DtM2RlaMm4nuXG4uzWHedWW1NDivBWMZtycYed9DA
- type: rouge
value: 20.3174
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWJjODE5NDdlMjM2YjNkOWU3NDJiMWFlZGU2YTRkNzliMTg5MGNkNDQ3YWU3MTBmY2E5ZTUxMzEyMDc0ZTU5YiIsInZlcnNpb24iOjF9.Kc1accwOycbNU1F7AT8LV8jC3NpYKMmOsZmdLeDdIi5BqgMJcQSP8oNt3L-hGbscLb-D7iIvQBFtmmiGqpnGDQ
- type: loss
value: 3.2174887657165527
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTQ0MzQxYTQ1M2NiODcwNTBjOTRiMGEzZDE0NDFlZWJlNmNlOWI1M2M1N2Q2ZTVkNWFiMzQxNDhjODQxNDFkYSIsInZlcnNpb24iOjF9.Vat1Thj2t_1of477BCINeYoTeNZX1NIPG13qVskJ44ivKLJgMr4BCp0luYNEi2skNTv3kYK2orqBdDfxPZlfDw
- type: gen_len
value: 57.1966
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I3MmZkMDY1YmM5N2VmMGE5OWQ0NDg5OWM0OWViMzU1ZTM1NjBmYWRmN2I2YmM5YTVkNGVlZGViNWEwOGYwMyIsInZlcnNpb24iOjF9.4c6j-ll2veK_PuCcXvRK8-8cPFdedKsqGHQsEhGpwj48uwI3PMqj9mF4HYvSGq5H3rVM_dvyPEEs9RhjCvSHBw
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.9884
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWMzODFmZGYwZmNjY2FkZTJmZWYyMjI1NDdlZDI3OTdmY2MzNzU0M2FhYmMxNTdkOGRiNzIwMTEyMTMwMTgyMSIsInZlcnNpb24iOjF9.pbT1OOzVOjnUp4q6lcpUPunDYTQqOiwQeRLRV69699SoMI3ay4bfd_hbWZUvXOuivoJ5JiDd9KBhEqYUninNCA
- type: rouge
value: 6.0596
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODNkNTE1NjU5ZmY3MmVmOGQxMjhhODRhZjIzMDRhMzJlYTY3YTkyNzM4YTAyMGI2YzRlMzljMDM5YzFjNzIyOCIsInZlcnNpb24iOjF9.NevkOank_Ou1u2ZfkEa3o4FF4DapvpFK_ucxLRm-xL-ZWGl9cLdLTOxVECrTn8Yasi_sWrjZUhGRWPkCKlJADQ
- type: rouge
value: 16.1419
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzNkM2M4YjQxM2IwNzIzMjAyMjM3MThlMGQwMDgzMGI3NDU0NWVlOTFhMzBlOGQ3ZTQzOGNkNmE5ZGI5NTkzOCIsInZlcnNpb24iOjF9.8DYhyJAiKIK2aIkQSwMy4NEiBSC4v8_h_3feA-TFBdd-icGg5YvKMQR7_IOa1-9AHBe6PphVSFjl82-nDp6lDA
- type: rouge
value: 32.9992
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTEwMmViZmZjMzA3OWYxNWMxZjFhMDBmMDRjOGViYzdiYzllNzk0YWZmNTU0NWIwMmVmZjQzNjlkZTBmZTU0YyIsInZlcnNpb24iOjF9.KFwuSVaUXx958EWZctKpK1wawA0EH4yxBJdp3Zru4Sn97oSyP_s5m-jjZiLfP6kmSajd3849dna-Uw77s3sVBg
- type: loss
value: 2.9468588829040527
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzhmODMwMDM2ZDFmNTMzZmFjMmJjMGQ0MmQzMDcyMmFiNmFjMzBhY2RjMTI1MDhiMjI4NTEyYWFlYTNlMzE0ZSIsInZlcnNpb24iOjF9.PHX9VAAgiUGOR8Uxuam4otU65hIzW6hBapaf1KY8o1FDfaoHWAKbSnpjZ3nXKYYeVV6LyGRny_7RdRbbbM8LAA
- type: gen_len
value: 298.2593
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGYzMDAzNTQyMzgxM2RhNjY3MDIyNGEyMWZjYmYyYmJlNWM3MTFkYzRjMDhkMmZhNDZiN2FhYTY3MGI2NDcxNyIsInZlcnNpb24iOjF9.ax3H6LohHUodVGhSMUWMZZZ-bCTXHEaGpK4jXuOdZkGsewYrX8fO1oRA0uDjACM-eceKFfVnMveHoU9EdMaeCA
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13) on the kmfoda/booksum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
|
pszemraj
| 2023-12-20T20:26:03Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"base_model:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16",
"base_model:finetune:pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"region:us"
] |
summarization
| 2022-09-26T23:21:47Z |
---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
datasets:
- kmfoda/booksum
inference: false
base_model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
results:
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 36.8427
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODY2YTBkOTUxZjRlOWYwNWI1OWM2ZDMwOTFjMGIzNjc5YzgwMWE2ZDJiNzY5YTZlZTZiZmNlYzNiYjc5NWZiMiIsInZlcnNpb24iOjF9.Jf41H9W-V6vbLXloL3XUtvKG2Uieoeijzean8Ns4AKRgX6OMeAaWpqoOG4Umpb1JsjtXvbSYdqwTlQVm0IAABQ
- type: rouge
value: 8.4234
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2MzQ1OTM1ZDM1OGU0OTliZGQ0Y2QzZDExZWQ2NzAwNzlhZThjYjc3ZmU1NDZiNTZjZmZiMTA1NjlhNGU1YyIsInZlcnNpb24iOjF9.etPfWSu1mxR5RN-9rq_F5FFi0IXPe81yGZWbpb6yDzZjAoiSTq4RCmaEUlt8JFXkoLJS3pP9JgjSr7Cg4dl2CQ
- type: rouge
value: 17.774
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjRkYTcyMzJiNzY5NWMwYzg2YmFkYTZiMzgxMzJjZDcwOTg3MWZmNTk3OTYzNzFkOGEyNTg4NmU0MjJlZDRlOCIsInZlcnNpb24iOjF9.Cd4LtEHKbEp-n42rDJb7KFqNlCUBKgCTz8sTNgkZVYJqY-rV5JGZtDz5mawNSbJTMn7rNnBNmaU4V99MGQyiBw
- type: rouge
value: 33.2901
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTI4M2RhNzQ5OWM4ZmExZjU1NDU0MDliZTM1YTgxODczMTFjNWFiNmJlNjc1YzEyY2FjZTJiNmFiZmNjNTc2MyIsInZlcnNpb24iOjF9.b52vREVLI3DgfDkku8lzi2KDWLiN9TvNENCjFAKmrifMDxpsjTPGn6qf0csvU6_kgjWkHKFO53VkLr-XFazPBg
- type: loss
value: 3.765686511993408
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTJjNzU1OWEwOWZiYzM2Zjk0ZjZhYmRmYjczMTJlZGFjOTNjZTY4Mjg3NTRiMTAzN2NlYTEyNjU2YWY5M2E5NiIsInZlcnNpb24iOjF9.Q4FFH8cbGLzaZWJUrSKeZl1os7h9S12v8a__oIoeeWL-c9GXVyNdtb5q-eb7r-4G5i9ytBc9NM6n90nuO353Aw
- type: gen_len
value: 213.8849
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTg3NGMwMTY0M2Y0M2JiOTFhNTQ2ODgxNzZjNTAwNjI4YmRhZTllNTU5ZjE5OGE2Y2EwZmI2YTQ3ZTQxNTFkNiIsInZlcnNpb24iOjF9.8yc25qbswrqJa56hlM1vvlD-Re7R1n3Q9_3U4c9OEzC9XIf8ir3zUQOrEZUb9vm5_H9a8QoiEXUcZG6Bq4fTAQ
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 35.4324
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTkyOTU5M2ZiYzc4NzU0Nzg3NzI2MDE3MTVkZGRlYzVlNWFlYzJlYjA3ZGNhYzIyYmM1NzVkZWEzMTRhMGRhNCIsInZlcnNpb24iOjF9.TGS-ZF3MKg0cbgpPm2wz7Y8KarRGvBNyfaaDHFpUesYCR5pcz3a_ojRAGXOTIek-fcS--ZvADjEz8by9GYBOAQ
- type: rouge
value: 5.9586
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTIzZGRkMTc3MmUxOGRhMzI2NjAzNGYxYjY4YTM5N2QxNDJiZTJlMmRiNzY3NTFmZDg2NzAzMWI1ZTA5ZmY4YiIsInZlcnNpb24iOjF9.1fyZffIo-wDg85krXWGgc90SlFLIU_v7URS-14zNEHZSe4kmbcdGmW963WKAEI2v2oRXU7uQ3BsgDS3d30KzDQ
- type: rouge
value: 16.134
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODI4Mzc1YTY3ZDBmYTMxZTJkMmU3YWI4OGE3NGVkODk1NDQ5NThlZTM0NmM5ZGEwODFjZWI5ODQ5YzAxYmMzOCIsInZlcnNpb24iOjF9.KzQLzOXFjJv_tRzKPnkp2AA_8u_aZtI2GQQeavB3L4ksmX-aOnlVu9-fXktfOCiXmmJCbyZfS3uicdyLgqyhBw
- type: rouge
value: 32.4141
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTFhNDQ1Y2YyMThjMzBmY2EzM2MxNzNkYTE1MDYxNzAwOWQxNDdlNjljZWE4ZWRiZmUxODkyOGM3ZDZiN2I3NyIsInZlcnNpb24iOjF9.YYxCtJlNax15r4oY_IikY1MmaU05WCD_JtTMKt5Jdb9Tco2KOPQ9z6Vc6AlGEJNaXVNRm-ROS7CKCDkC55B_Bg
- type: loss
value: 3.050943374633789
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDllNjgzMWZjNzNhZmFlZWQ4OTY3MTVjY2FkOGEzYjFkOGNhNzM3MjhhNTUyNWI5ODhhNTk2MDhlODNhNGMxOCIsInZlcnNpb24iOjF9.5S2y4SbzKWu6BHlnyUH2R9jwO780INnzqQbdKHXizkJKvX8g9qpuYB0Iu41e1aWqmePdY0JbVUqhG3Xfo2otBA
- type: gen_len
value: 279.8735
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmRhZTY2MDMwMTk2ZGFjNTJhN2Q4NmNjYjRjNjgzMzljNzBiYjEwNTgzZGNhYmY5MGNhYTY2MzE0OTlhNGNhZSIsInZlcnNpb24iOjF9.epjIxFmPlfDHJc--eJIo8AgnkjQBDLokICjxVqwyHiE6T0Hlj8D69RhOplEIDwMQyXC5usfkF0zW7ib8JuhyCg
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 38.066
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGJlYjZhMDIzZTAyNzU3MGFhYmIwZTBjNTViZTQ5MjFjODcyNTUzNDg5MWVlNzMxZTQ0NjA5ZjJlYWYxZDk4ZiIsInZlcnNpb24iOjF9.g-Ppx-hZPJBQM160VSDZWLFt0WEv5WbBiOpwQlbFnQ12QSezZiu-NR2wsaZeNLIVWaeEDVTCVpVcHf0-YymYAA
- type: rouge
value: 12.5289
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjlkNjY3OGE4YWIxMjEzZmQyMDg5ZjMxNjhhMzBlMDQ1NDgwZGQ0NWUyYmFhZTY0Mzc4MWQ0NTJjZmE4MmZiOCIsInZlcnNpb24iOjF9.X-rXBFAXTJXVmihkVHdqdpm6QCHbuI4Ligd2SsmvVcpMux6ep2EoBKd4xuTW4WCr6Qjsa7tZ7kJM-1pu9kKSDw
- type: rouge
value: 22.3393
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2E1NGM1YmMyYTc5NTkxNzU3ZWM4OGRmNDBhNDdlZjYwZjBjNWNkNmJkMjkyMDkzNDBlMGIyZDg4ZjBlYTQ3OCIsInZlcnNpb24iOjF9.vZi95CQMrkdETfhQjjgoO2WkpM3Fr4NZCTX7S9q3TnsC9J9KELfcNNXq7rtbWgQndUK74AvBt7G6nG7Qj13nBw
- type: rouge
value: 31.9802
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzU3ZTIzZDhiYjc1ODk2ODg4NTI2MDFhZWFhMDRmMTg2OTg0MzkyMjQ0NDkyODI0ZTE1MmM2MzNiODQ2Y2EzZiIsInZlcnNpb24iOjF9.k48PRPLAGKPT-ILO5HbPciwFG9vdR6_ICvUXmOnJI4mz5dIoBLvR0aTdWCU070jyPveDwXisIvE9scK9jWsUCA
- type: loss
value: 3.0360958576202393
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgzOGRlNmMwZjYyMzNkNjcwNDM4MTYyZjgzODhjYTdhY2JiNWY4ZjMzNWJhZjc1YjNiYjViZDk2ODMzMmI5ZiIsInZlcnNpb24iOjF9.dH1fJs84sTWXqrmdsCMuc6zexedn0uUWd9gmVV2JKzFzpPbTxzIJSNez7jaGz_sgSK8q-AeclWFrBAgPDnM6Bg
- type: gen_len
value: 161.4671
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjc5NGM4OWU5ZDM2YTZhZmM2OTgwY2ZiODRmYzE2MWRlMTVhZDBjZTQ3ODJkNjU4NzdkZGZlZDk1YjdkYmE0OCIsInZlcnNpb24iOjF9.OSzFnK9k7IT0cv2qXSVzgjTVLkxkqYnUI9OQqPcoEjBK8nqY0OdMQ8BWq6CN6rt6VmVk111B0TJJCTEfseiHBg
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP16) on the kmfoda/booksum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
postbot/emailgen-pythia-410m-deduped
|
postbot
| 2023-12-20T20:25:50Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"en",
"dataset:postbot/multi-emails-hq",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:quantized:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-15T09:30:22Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- postbot/multi-emails-hq
metrics:
- accuracy
widget:
- text: 'Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus
will be on the exam'
example_title: email to prof
- text: 'Hey <NAME>,
Thank you for signing up for my weekly newsletter. Before we get started, you''ll
have to confirm your email address.'
example_title: newsletter
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and ask about office hours'
example_title: office hours
- text: 'Greetings <NAME>,
I hope you had a splendid evening at the Company sausage eating festival. I am
reaching out because'
example_title: festival
- text: 'Good Morning Harold,
I was wondering when the next'
example_title: event
- text: URGENT - I need the TPS reports
example_title: URGENT
- text: 'Hi Archibald,
I hope this email finds you extremely well.'
example_title: emails that find you
- text: 'Hello there.
I just wanted to reach out and check in to'
example_title: checking in
- text: 'Hello <NAME>,
I hope this email finds you well. I wanted to reach out and see if you''ve enjoyed
your time with us'
example_title: work well
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and see if we could catch
up'
example_title: catch up
- text: I'm <NAME> and I just moved into the area and wanted to reach out and get
some details on where I could get groceries and
example_title: grocery
pipeline_tag: text-generation
base_model: EleutherAI/pythia-410m-deduped
model-index:
- name: multi-emails-hq-pythia-410m-deduped-r1
results: []
---
# emailgen-pythia-410m-deduped
[](https://colab.research.google.com/gist/pszemraj/94b0e6b95437896f800a65ae2e5f9ab4/emailgen-pythia-410m-deduped.ipynb
)
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on email data.
It achieves the following results on the evaluation set:
- Loss: 2.1018
- Accuracy: 0.6157
- perplexity: 8.181
## Model description
- fine-tuned on dataset of emails for 4 epochs
- intended use: "text completion" of partially written emails
## Usage example
```python
from transformers import pipeline
model_tag = "postbot/emailgen-pythia-410m-deduped"
generator = pipeline(
"text-generation",
model=model_tag,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
result = generator(
prompt,
) # generate
print(result[0]["generated_text"])
```
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__emailgen-pythia-410m-deduped)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.65 |
| ARC (25-shot) | 27.9 |
| HellaSwag (10-shot) | 40.04 |
| MMLU (5-shot) | 27.35 |
| TruthfulQA (0-shot) | 38.2 |
| Winogrande (5-shot) | 52.09 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.99 |
|
iamhassaan/controlnet_sd_apparel_NVIDIA_A10G
|
iamhassaan
| 2023-12-20T20:16:51Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-20T18:11:17Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-iamhassaan/controlnet_sd_apparel_NVIDIA_A10G
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with new type of conditioning.
You can find some example images below.
prompt: new knitwear design based on sketch, 100% cotton, red colour, make it photorealistic

|
winddude/mamba_financial_headline_sentiment
|
winddude
| 2023-12-20T20:07:20Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"finance",
"text-classification",
"en",
"dataset:winddude/finacial_pharsebank_66agree_split",
"dataset:financial_phrasebank",
"base_model:state-spaces/mamba-2.8b",
"base_model:finetune:state-spaces/mamba-2.8b",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T19:29:29Z |
---
license: apache-2.0
datasets:
- winddude/finacial_pharsebank_66agree_split
- financial_phrasebank
language:
- en
base_model:
- state-spaces/mamba-2.8b
metrics:
- accuracy
- f1
- recall
- precission
model-index:
- name: financial-sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_66agree
metrics:
- name: Accuracy
type: accuracy
value: 0.82
- name: Percision
type: percision
value: 0.82
- name: recall
type: recall
value: 0.82
- name: F1
type: f1
value: 0.82
pipeline_tag: text-classification
tags:
- finance
---
# Mamba Financial Headline Sentiment Classifier
A sentment classifier for finacial headlines using mamba 2.8b as the base model.
Text is classified into 1 of 3 labels; positive, neutral, or negative.
## Prompt Format:
```
prompt = f"""Classify the setiment of the following news headlines as either `positive`, `neutral`, or `negative`.\n
Headline: {headline}\n
Classification:"""
```
where `headline` is the text you want to be classified.
|
DaRkSpyro/CynderTheDragon
|
DaRkSpyro
| 2023-12-20T20:04:47Z | 0 | 0 | null |
[
"music",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-12-20T19:56:31Z |
---
license: apache-2.0
language:
- en
tags:
- music
metrics:
- accuracy
---
|
LoneStriker/CatPPT-8.0bpw-h8-exl2
|
LoneStriker
| 2023-12-20T20:02:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-20T19:59:12Z |
---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest ranked 7B chat model on the leaderboard, that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
LoneStriker/CatPPT-6.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T19:55:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-20T19:52:40Z |
---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest ranked 7B chat model on the leaderboard, that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
badokorach/distilbert-base-uncased-finetuned-squad
|
badokorach
| 2023-12-20T19:51:54Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-14T16:55:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 47 | 3.7022 |
| No log | 2.0 | 94 | 3.2578 |
| No log | 3.0 | 141 | 3.1677 |
| No log | 4.0 | 188 | 3.1411 |
| No log | 5.0 | 235 | 3.2288 |
| No log | 6.0 | 282 | 3.2640 |
| No log | 7.0 | 329 | 3.3689 |
| No log | 8.0 | 376 | 3.4776 |
| No log | 9.0 | 423 | 3.5919 |
| No log | 10.0 | 470 | 3.6319 |
| 2.2819 | 11.0 | 517 | 3.8199 |
| 2.2819 | 12.0 | 564 | 3.9339 |
| 2.2819 | 13.0 | 611 | 3.9348 |
| 2.2819 | 14.0 | 658 | 3.9293 |
| 2.2819 | 15.0 | 705 | 4.0312 |
| 2.2819 | 16.0 | 752 | 4.1651 |
| 2.2819 | 17.0 | 799 | 4.0788 |
| 2.2819 | 18.0 | 846 | 4.1112 |
| 2.2819 | 19.0 | 893 | 4.1513 |
| 2.2819 | 20.0 | 940 | 4.1676 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/CatPPT-4.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T19:41:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-20T19:40:12Z |
---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest ranked 7B chat model on the leaderboard, that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
ntc-ai/SDXL-LoRA-slider.overenthusiastic
|
ntc-ai
| 2023-12-20T19:38:18Z | 53 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-20T19:38:15Z |
---
language:
- en
thumbnail: "images/evaluate/overenthusiastic...unenthusiastic/overenthusiastic_17_3.0.png"
widget:
- text: overenthusiastic
output:
url: images/overenthusiastic_17_3.0.png
- text: overenthusiastic
output:
url: images/overenthusiastic_19_3.0.png
- text: overenthusiastic
output:
url: images/overenthusiastic_20_3.0.png
- text: overenthusiastic
output:
url: images/overenthusiastic_21_3.0.png
- text: overenthusiastic
output:
url: images/overenthusiastic_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "overenthusiastic"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - overenthusiastic (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/overenthusiastic_17_-3.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_17_0.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_17_3.0.png" width=256 height=256 /> |
| <img src="images/overenthusiastic_19_-3.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_19_0.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_19_3.0.png" width=256 height=256 /> |
| <img src="images/overenthusiastic_20_-3.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_20_0.0.png" width=256 height=256 /> | <img src="images/overenthusiastic_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
overenthusiastic
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.overenthusiastic', weight_name='overenthusiastic.safetensors', adapter_name="overenthusiastic")
# Activate the LoRA
pipe.set_adapters(["overenthusiastic"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, overenthusiastic"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 510+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
LoneStriker/CatPPT-3.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T19:35:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"text-generation",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:rishiraj/CatPPT-base",
"base_model:adapter:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-20T19:33:59Z |
---
license: apache-2.0
library_name: peft
base_model: rishiraj/CatPPT-base
datasets:
- HuggingFaceH4/no_robots
language:
- en
widget:
- text: |
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
output:
text: >-
Aye, me hearties! 'Tis not likely a human can eat a helicopter in any sittin', let alone one! They be too big and made of metal, and not fit for consumption. But if ye be referrin' to helicopter snacks, like nuts and trail mix, then a human might be able to munch a goodly amount in one sittin'. Arr!
tags:
- generated_from_trainer
pipeline_tag: text-generation
model-index:
- name: CatPPT
results: []
---
# 😼 CatPPT
Introducing "CatPPT" - the purrfect alternative to that other big cat in town, known for keeping all the secrets to itself! Our feline friend here is created through merging openchat and neuralchat models using Gradient SLERP method (resulting in [rishiraj/CatPPT-base](https://huggingface.co/rishiraj/CatPPT-base)) and then finetuned on no_robots dataset for chat.
This is the top-performing 7B model on the leaderboard, that's free from any whiff of evaluation data contamination.

## Model date
rishiraj/CatPPT was trained between 15th and 17th December, 2023.
## Evaluation
It achieves the following results on the [Open_LLM_Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). At the time of release, CatPPT is the highest ranked 7B chat model on the leaderboard, that's **free from evaluation data contamination**.
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|------------------------------------|-------|-----|---------|-----|----------|----------|-----|
|**rishiraj/CatPPT** |**72.32** |**68.09**|**86.69** |**65.16**|**61.55** |**81.61** |**70.81**|
|Intel/neural-chat-7b-v3-3 |69.83 |66.89|85.26 |63.07|63.01 |79.64 |61.11|
|openchat/openchat-3.5-1210 |68.89 |64.93|84.92 |64.62|52.15 |80.74 |65.96|
|meta-math/MetaMath-Mistral-7B |65.78 |60.67|82.58 |61.95|44.89 |75.77 |68.84|
|Deci/DeciLM-7B-instruct |63.19 |61.01|82.37 |60.24|49.75 |79.72 |46.02|
|mistralai/Mistral-7B-Instruct-v0.2 |65.71 |63.14|84.88 |60.78|68.26 |77.19 |40.03|
|mistralai/Mixtral-8x7B-Instruct-v0.1|72.62 |70.22|87.63 |71.16|64.58 |81.37 |60.73|
|meta-llama/Llama-2-70b-hf |67.87 |67.32|87.33 |69.83|44.92 |83.74 |54.06|
|tiiuae/falcon-180B |67.85 |69.45|88.86 |70.5 |45.47 |86.9 |45.94|
## Inference procedure
Here's how you can run the model using the pipeline() function from 🤗 Transformers:
```
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="rishiraj/CatPPT", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate"
},
{
"role": "user",
"content": "How many helicopters can a human eat in one sitting?"
}
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9947 | 0.16 | 3 | 2.0093 |
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
- PEFT 0.6.1
## Citation Information
```
@misc{rishiraj2023catppt,
author = {Rishiraj Acharya},
title = {CatPPT},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/rishiraj/CatPPT}}
}
```
|
Remeris/BART-CNN-Convosumm
|
Remeris
| 2023-12-20T19:27:52Z | 19 | 2 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:cnn_dailymail",
"dataset:Convosumm",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:cc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-12-10T16:11:51Z |
---
license: cc
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
- Convosumm
widget:
- text: >
Can we say that among the Pythagoreans the “body” of the concept was number? What do you mean by "concept body"? shell. What then is hidden behind this shell? Definition of a concept) what definition of a concept is ultimately hidden behind the body in the form of a number? All those that the Pythagoreans indicated. I want to say that numbers were their very concept. They thought in numbers as in concepts. Shape maybe?) you can say yes, but it will need to be developed on a mug. The definitions of thought are subject to numbers. On the one hand, numbers are pure abstraction, which gives initial freedom of thought for the derivation of abstract, embryonic definitions, but then for the derivation, description of reality, more specific concepts, the abstractness of numbers, on the contrary, limits, “leads into the darkness.” One is the object, “in itself”;'
model-index:
- name: BART-CNN-Convosumm
results:
- task:
name: Abstractive Dialogue Summarization
type: abstractive-text-summarization
dataset:
name: Reddit arg-filtered part of Convosumm
type: Convosumm
metrics:
- name: Validation ROGUE-1
type: rogue-1
value: 38.6252
- name: Validation ROGUE-L
type: rogue-l
value: 23.902
- name: Test ROGUE-1
type: rogue-1
value: 38.3642
- name: Test ROGUE-L
type: rogue-l
value: 23.7782
language:
- en
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-CNN-Convosumm
## Model description
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the arg-filtered reddit part of [Convosumm](https://github.com/Yale-LILY/ConvoSumm) dataset.
Model is trained for [multilanguage telegram-bot summarizer](https://github.com/akaRemeris/XLConvosumm-bot).
## Intended uses & limitations
Input expected: unstructured set of concatenated messages without nickname-message indexing.
## Training and evaluation data
More information needed
## Training procedure
Wandb logged [results](https://wandb.ai/remeris/BART-CNN-Convosumm/runs/68syxthd).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 1
- num_epochs: 7
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 6.207 | 1.0 | 10 | 4.2651 | 32.3341 | 7.812 | 20.0411 | 29.4849 | 77.38 |
| 4.0248 | 1.99 | 20 | 3.9903 | 36.0787 | 11.0447 | 21.3596 | 33.2903 | 130.58 |
| 3.5933 | 2.99 | 30 | 3.9020 | 34.2931 | 11.2036 | 20.7935 | 30.8361 | 140.02 |
| 3.3086 | 3.98 | 40 | 3.8712 | 38.4842 | 11.9947 | 23.4913 | 34.4347 | 85.78 |
| 3.112 | 4.98 | 50 | 3.8700 | 38.652 | 11.8315 | 23.5208 | 34.5998 | 76.2 |
| 2.9933 | 5.97 | 60 | 3.8809 | 38.66 | 12.3337 | 23.4394 | 35.1976 | 83.26 |
| 2.834 | 6.97 | 70 | 3.8797 | 38.6252 | 12.2556 | 23.902 | 34.6324 | 81.28 |
It achieves the following results on the evaluation set (50 data points):
- Loss: 3.8797
- Rouge1: 38.6252
- Rouge2: 12.2556
- Rougel: 23.902
- Rougelsum: 34.6324
- Gen Len: 81.28
It achieves the following results on the test set (250 data points):
- Loss: 3.8343
- Rouge1: 38.3642
- Rouge2: 12.2056
- Rougel: 23.7782
- Rougelsum: 34.3959
- Gen Len: 84.132
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
livingbox/dining-room-19-12-v2
|
livingbox
| 2023-12-20T19:19:09Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-20T19:13:40Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dining-room-19/12-v2 Dreambooth model trained by livingbox with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
mlx-community/mistral-7B-v0.1
|
mlx-community
| 2023-12-20T19:13:19Z | 37 | 11 |
mlx
|
[
"mlx",
"mistral",
"text-generation",
"arxiv:2310.06825",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-19T17:39:31Z |
---
pipeline_tag: text-generation
inference: false
tags:
- mistral
- mlx
license: apache-2.0
library_name: mlx
---
# Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
This repository contains the `mistral-7B-v0.1` weights in `npz` format suitable for use with Apple's MLX framework.
## Use with MLX
```bash
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir-use-symlinks False --local-dir mistral-7B-v0.1 mlx-community/mistral-7B-v0.1
# Run example
python llms/mistral/mistral.py --prompt "My name is"
```
Please, refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on Mistral-7B-v0.1.
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError0.4_Seed104
|
behzadnet
| 2023-12-20T19:11:46Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-20T19:11:43Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
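For reference, here is a minimal, hedged sketch of how the configuration listed above maps onto the `transformers` `BitsAndBytesConfig` API together with a PEFT adapter load. The base-model and adapter ids are placeholders, not values taken from this card.
```python
# Hedged sketch only: rebuild the quantization config listed above.
# BASE_MODEL_ID and ADAPTER_ID are placeholders — substitute the real repository ids.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

BASE_MODEL_ID = "base-model-id"   # placeholder
ADAPTER_ID = "adapter-repo-id"    # placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)  # attach the trained adapter
```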
### Framework versions
- PEFT 0.7.0.dev0
|
mlx-community/Llama-2-7b-chat-mlx
|
mlx-community
| 2023-12-20T19:10:32Z | 102 | 84 |
mlx
|
[
"mlx",
"llama",
"facebook",
"meta",
"llama-2",
"text-generation",
"license:llama2",
"region:us"
] |
text-generation
| 2023-12-06T15:21:20Z |
---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- mlx
license: llama2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, in `npz` format suitable for use in Apple's MLX framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir Llama-2-7b-chat-mlx mlx-llama/Llama-2-7b-chat-mlx
# Run example
python mlx-examples/llama/llama.py --prompt "My name is " Llama-2-7b-chat-mlx/ Llama-2-7b-chat-mlx/tokenizer.model
```
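If you prefer to stay in Python, the download step can also be done with `huggingface_hub.snapshot_download`; the following is a hedged sketch equivalent to the CLI call above.
```python
# Hedged alternative to the huggingface-cli download above.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="mlx-llama/Llama-2-7b-chat-mlx", local_dir="Llama-2-7b-chat-mlx")
```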
Please refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat) for details on Llama 2.
|
mlx-community/Llama-2-7b-mlx
|
mlx-community
| 2023-12-20T19:10:11Z | 115 | 11 |
mlx
|
[
"mlx",
"llama",
"facebook",
"meta",
"llama-2",
"text-generation",
"license:llama2",
"region:us"
] |
text-generation
| 2023-12-06T15:58:44Z |
---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- mlx
license: llama2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, in `npz` format suitable for use in Apple's MLX framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir Llama-2-7b-mlx mlx-llama/Llama-2-7b-mlx
# Run example
python mlx-examples/llama/llama.py --prompt "My name is" Llama-2-7b-mlx/ Llama-2-7b-mlx/tokenizer.model
```
Please refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b/tree/main) for details on Llama 2.
|
mlx-community/CodeLlama-7b-mlx
|
mlx-community
| 2023-12-20T19:09:24Z | 8 | 10 |
mlx
|
[
"mlx",
"llama",
"facebook",
"meta",
"llama-2",
"codellama",
"text-generation",
"license:llama2",
"region:us"
] |
text-generation
| 2023-12-06T17:28:40Z |
---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- codellama
- mlx
license: llama2
---
# **CodeLlama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is designed for general code synthesis and understanding. This is the repository for the 7B base model, in `npz` format suitable for use in Apple's MLX framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir CodeLlama-7b-mlx mlx-llama/CodeLlama-7b-mlx
# Run example
python mlx-examples/llama/llama.py --prompt "int main(char argc, char **argv) {" CodeLlama-7b-mlx/ CodeLlama-7b-mlx/tokenizer.model
```
Please refer to the [original model card](https://github.com/facebookresearch/codellama/blob/main/MODEL_CARD.md) for details on CodeLlama.
|
Matvey67/distilbert-base-uncased-finetuned-emotion
|
Matvey67
| 2023-12-20T19:08:35Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T17:30:40Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260809028404356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.926
- F1: 0.9261
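A hedged usage sketch (not part of the original card) with the `transformers` text-classification pipeline; the input sentence is only an example:
```python
# Hedged sketch: run the fine-tuned emotion classifier via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Matvey67/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # e.g. [{'label': ..., 'score': ...}]
```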
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7777 | 1.0 | 250 | 0.2958 | 0.9165 | 0.9163 |
| 0.2393 | 2.0 | 500 | 0.2123 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Tamakoma/CowboyHatEmoji
|
Tamakoma
| 2023-12-20T19:07:52Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnextv2",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-large-22k-384",
"base_model:finetune:facebook/convnextv2-large-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-24T14:42:12Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
base_model: facebook/convnextv2-large-22k-384
model-index:
- name: outputs
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.7777777777777778
name: Accuracy
---
# Cowboy Hat emoji 🤠 (Western)
This model is a fine-tuned version of [facebook/convnextv2-large-22k-384](https://huggingface.co/facebook/convnextv2-large-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4250
- Accuracy: 0.7778
## Model description
Use this model when you want to know whether an artwork is 🤠 or not 🤠.
- Current iteration: v3.5 (Continuous Image Integration)
## Wait, why?
Gelbooru contains a lot of images, but not all of them come from the Southeast Asian region. To filter out such images, we created a classifier that, in theory, learns the differences between Western art (USA/Europe/etc.) and non-Western art (Japan/China/SEA).
Here, "not Western" is limited to the Asian region (Japan, Korea, China, Taiwan, Thailand and the surrounding area). The author believes the art from this region shares a similar "style", which he personally prefers over Western art.
## Intended uses & limitations
Filter Gelbooru data into 🤠 (Western) and not-🤠 (non-Western) images.
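A hedged usage sketch (not from the original card), assuming the checkpoint is loaded through the `transformers` image-classification pipeline; `artwork.png` is a placeholder path and the label names come from the repository's config:
```python
# Hedged sketch: score an image as 🤠 / not 🤠 with the pipeline API.
from transformers import pipeline

classifier = pipeline("image-classification", model="Tamakoma/CowboyHatEmoji")
print(classifier("artwork.png"))  # placeholder file path
```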
## Training and evaluation data
Selected 358 images of 🤠 and not 🤠.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 802565
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7384 | 1.0 | 152 | 0.4268 | 0.7963 |
| 0.2888 | 2.0 | 304 | 0.4250 | 0.7778 |
| 0.2953 | 3.0 | 456 | 0.4250 | 0.7778 |
| 0.4914 | 4.0 | 608 | 0.4250 | 0.7778 |
| 0.4099 | 5.0 | 760 | 0.4250 | 0.7778 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Lau123/distilbert-base-uncased-detect_ai_generated_text
|
Lau123
| 2023-12-20T19:06:51Z | 31 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T19:00:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-detect_ai_generated_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-detect_ai_generated_text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- Accuracy: 1.0
- F1: 1.0
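A hedged sketch (not part of the card) of scoring a text with this checkpoint directly; the meaning of the output labels is not documented here:
```python
# Hedged sketch: raw scoring without the pipeline helper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Lau123/distilbert-base-uncased-detect_ai_generated_text"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Some text to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label semantics are not documented in this card
```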
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.2078 | 1.0 | 47 | 0.0219 | 1.0 | 1.0 |
| 0.0153 | 2.0 | 94 | 0.0089 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
SkyR/roberta-base-ours-run-1
|
SkyR
| 2023-12-20T18:57:28Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T15:35:19Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: roberta-base
model-index:
- name: run-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run-1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3480
- Accuracy: 0.73
- Precision: 0.6930
- Recall: 0.6829
- F1: 0.6871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
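A hedged sketch of `TrainingArguments` matching the hyperparameters above (not the author's actual script; the label count is an assumption and dataset loading/metrics are omitted):
```python
# Hedged sketch: TrainingArguments mirroring the listed hyperparameters.
# The Adam betas/epsilon above are the Trainer defaults, so they need no extra arguments.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)  # label count assumed

args = TrainingArguments(
    output_dir="run-1",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```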
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0042 | 1.0 | 50 | 0.8281 | 0.665 | 0.6105 | 0.6240 | 0.6016 |
| 0.8062 | 2.0 | 100 | 0.9313 | 0.665 | 0.6513 | 0.6069 | 0.5505 |
| 0.627 | 3.0 | 150 | 0.8275 | 0.72 | 0.6713 | 0.6598 | 0.6638 |
| 0.4692 | 4.0 | 200 | 0.8289 | 0.68 | 0.6368 | 0.6447 | 0.6398 |
| 0.2766 | 5.0 | 250 | 1.1263 | 0.72 | 0.6893 | 0.6431 | 0.6417 |
| 0.1868 | 6.0 | 300 | 1.2901 | 0.725 | 0.6823 | 0.6727 | 0.6764 |
| 0.1054 | 7.0 | 350 | 1.6742 | 0.68 | 0.6696 | 0.6427 | 0.6384 |
| 0.0837 | 8.0 | 400 | 1.6199 | 0.72 | 0.6826 | 0.6735 | 0.6772 |
| 0.0451 | 9.0 | 450 | 1.8324 | 0.735 | 0.7029 | 0.6726 | 0.6727 |
| 0.0532 | 10.0 | 500 | 2.1136 | 0.705 | 0.6949 | 0.6725 | 0.6671 |
| 0.0178 | 11.0 | 550 | 2.1136 | 0.73 | 0.6931 | 0.6810 | 0.6832 |
| 0.0111 | 12.0 | 600 | 2.2740 | 0.69 | 0.6505 | 0.6430 | 0.6461 |
| 0.0205 | 13.0 | 650 | 2.3026 | 0.725 | 0.6965 | 0.6685 | 0.6716 |
| 0.0181 | 14.0 | 700 | 2.2901 | 0.735 | 0.7045 | 0.6806 | 0.6876 |
| 0.0074 | 15.0 | 750 | 2.2277 | 0.74 | 0.7075 | 0.6923 | 0.6978 |
| 0.0063 | 16.0 | 800 | 2.2720 | 0.75 | 0.7229 | 0.7051 | 0.7105 |
| 0.0156 | 17.0 | 850 | 2.1237 | 0.73 | 0.6908 | 0.6841 | 0.6854 |
| 0.0027 | 18.0 | 900 | 2.2376 | 0.73 | 0.6936 | 0.6837 | 0.6874 |
| 0.003 | 19.0 | 950 | 2.3359 | 0.735 | 0.6992 | 0.6897 | 0.6937 |
| 0.0012 | 20.0 | 1000 | 2.3480 | 0.73 | 0.6930 | 0.6829 | 0.6871 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SkyR/roberta-base-ours-run-3
|
SkyR
| 2023-12-20T18:57:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T16:24:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: roberta-base
model-index:
- name: run-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run-3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4223
- Accuracy: 0.75
- Precision: 0.7128
- Recall: 0.6998
- F1: 0.7043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0025 | 1.0 | 50 | 0.8925 | 0.63 | 0.6703 | 0.5704 | 0.5060 |
| 0.8187 | 2.0 | 100 | 0.7915 | 0.595 | 0.6007 | 0.5926 | 0.5344 |
| 0.5671 | 3.0 | 150 | 0.9573 | 0.695 | 0.6520 | 0.6350 | 0.6380 |
| 0.3218 | 4.0 | 200 | 0.9195 | 0.68 | 0.6447 | 0.6539 | 0.6461 |
| 0.2208 | 5.0 | 250 | 1.2429 | 0.715 | 0.6801 | 0.6617 | 0.6663 |
| 0.1614 | 6.0 | 300 | 1.5295 | 0.71 | 0.6736 | 0.6543 | 0.6423 |
| 0.1129 | 7.0 | 350 | 2.1055 | 0.71 | 0.6779 | 0.6413 | 0.6511 |
| 0.098 | 8.0 | 400 | 1.9579 | 0.705 | 0.6697 | 0.6558 | 0.6601 |
| 0.0479 | 9.0 | 450 | 2.0535 | 0.72 | 0.6794 | 0.6663 | 0.6711 |
| 0.0173 | 10.0 | 500 | 2.5381 | 0.7 | 0.6838 | 0.6604 | 0.6608 |
| 0.0308 | 11.0 | 550 | 2.4592 | 0.735 | 0.7014 | 0.6851 | 0.6901 |
| 0.0265 | 12.0 | 600 | 2.3131 | 0.725 | 0.6910 | 0.6845 | 0.6849 |
| 0.016 | 13.0 | 650 | 2.4025 | 0.74 | 0.7035 | 0.6915 | 0.6949 |
| 0.013 | 14.0 | 700 | 2.3933 | 0.745 | 0.7070 | 0.6831 | 0.6909 |
| 0.016 | 15.0 | 750 | 2.6819 | 0.725 | 0.7006 | 0.6738 | 0.6759 |
| 0.0126 | 16.0 | 800 | 2.3679 | 0.74 | 0.7050 | 0.6839 | 0.6898 |
| 0.0023 | 17.0 | 850 | 2.5252 | 0.745 | 0.7119 | 0.6880 | 0.6933 |
| 0.01 | 18.0 | 900 | 2.5598 | 0.74 | 0.7056 | 0.6828 | 0.6906 |
| 0.0093 | 19.0 | 950 | 2.4353 | 0.745 | 0.7057 | 0.6922 | 0.6974 |
| 0.0039 | 20.0 | 1000 | 2.4223 | 0.75 | 0.7128 | 0.6998 | 0.7043 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SkyR/roberta-base-ours-run-4
|
SkyR
| 2023-12-20T18:57:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T16:48:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: roberta-base
model-index:
- name: run-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run-4
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6296
- Accuracy: 0.685
- Precision: 0.6248
- Recall: 0.6164
- F1: 0.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0195 | 1.0 | 50 | 0.8393 | 0.615 | 0.4126 | 0.5619 | 0.4606 |
| 0.7594 | 2.0 | 100 | 0.7077 | 0.7 | 0.6896 | 0.6663 | 0.6178 |
| 0.5515 | 3.0 | 150 | 0.9342 | 0.68 | 0.6334 | 0.5989 | 0.6016 |
| 0.3739 | 4.0 | 200 | 0.7755 | 0.735 | 0.7032 | 0.7164 | 0.7063 |
| 0.2648 | 5.0 | 250 | 0.9200 | 0.7 | 0.6584 | 0.6677 | 0.6611 |
| 0.1726 | 6.0 | 300 | 1.1898 | 0.71 | 0.6653 | 0.6550 | 0.6570 |
| 0.1452 | 7.0 | 350 | 1.5086 | 0.73 | 0.6884 | 0.6768 | 0.6812 |
| 0.0856 | 8.0 | 400 | 2.6159 | 0.68 | 0.6754 | 0.5863 | 0.5951 |
| 0.1329 | 9.0 | 450 | 1.9491 | 0.71 | 0.6692 | 0.6442 | 0.6463 |
| 0.0322 | 10.0 | 500 | 1.7897 | 0.74 | 0.6977 | 0.6939 | 0.6946 |
| 0.0345 | 11.0 | 550 | 1.9100 | 0.725 | 0.6827 | 0.6853 | 0.6781 |
| 0.026 | 12.0 | 600 | 2.5041 | 0.68 | 0.6246 | 0.6115 | 0.6137 |
| 0.0084 | 13.0 | 650 | 2.5343 | 0.715 | 0.6708 | 0.6617 | 0.6637 |
| 0.0145 | 14.0 | 700 | 2.4112 | 0.715 | 0.6643 | 0.6595 | 0.6614 |
| 0.0119 | 15.0 | 750 | 2.5303 | 0.705 | 0.6479 | 0.6359 | 0.6390 |
| 0.0026 | 16.0 | 800 | 2.6299 | 0.705 | 0.6552 | 0.6447 | 0.6455 |
| 0.0077 | 17.0 | 850 | 2.4044 | 0.715 | 0.6667 | 0.6576 | 0.6596 |
| 0.0055 | 18.0 | 900 | 2.8077 | 0.68 | 0.6208 | 0.6065 | 0.6098 |
| 0.0078 | 19.0 | 950 | 2.5608 | 0.68 | 0.6200 | 0.6104 | 0.6129 |
| 0.0018 | 20.0 | 1000 | 2.6296 | 0.685 | 0.6248 | 0.6164 | 0.6188 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SkyR/roberta-base-ours-run-5
|
SkyR
| 2023-12-20T18:56:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T17:12:42Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: roberta-base
model-index:
- name: run-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run-5
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2694
- Accuracy: 0.745
- Precision: 0.7091
- Recall: 0.7017
- F1: 0.7043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9558 | 1.0 | 50 | 0.8587 | 0.665 | 0.6541 | 0.6084 | 0.5787 |
| 0.7752 | 2.0 | 100 | 0.8892 | 0.655 | 0.6416 | 0.5835 | 0.5790 |
| 0.5771 | 3.0 | 150 | 0.7066 | 0.715 | 0.6884 | 0.7026 | 0.6915 |
| 0.3738 | 4.0 | 200 | 1.0130 | 0.705 | 0.6578 | 0.6409 | 0.6455 |
| 0.253 | 5.0 | 250 | 1.1405 | 0.74 | 0.7132 | 0.7018 | 0.7059 |
| 0.1604 | 6.0 | 300 | 1.1993 | 0.69 | 0.6334 | 0.6244 | 0.6261 |
| 0.1265 | 7.0 | 350 | 1.5984 | 0.705 | 0.6875 | 0.6775 | 0.6764 |
| 0.0741 | 8.0 | 400 | 1.4755 | 0.745 | 0.7116 | 0.7132 | 0.7114 |
| 0.0505 | 9.0 | 450 | 2.2514 | 0.71 | 0.6791 | 0.6427 | 0.6524 |
| 0.0372 | 10.0 | 500 | 2.2234 | 0.71 | 0.6675 | 0.6503 | 0.6488 |
| 0.0161 | 11.0 | 550 | 2.1070 | 0.72 | 0.6783 | 0.6712 | 0.6718 |
| 0.016 | 12.0 | 600 | 2.0232 | 0.72 | 0.6737 | 0.6659 | 0.6688 |
| 0.0197 | 13.0 | 650 | 2.0224 | 0.74 | 0.7065 | 0.6954 | 0.6895 |
| 0.01 | 14.0 | 700 | 2.1777 | 0.74 | 0.7023 | 0.6904 | 0.6936 |
| 0.0173 | 15.0 | 750 | 2.3227 | 0.72 | 0.6761 | 0.6590 | 0.6638 |
| 0.0066 | 16.0 | 800 | 2.2131 | 0.735 | 0.6983 | 0.6912 | 0.6923 |
| 0.0043 | 17.0 | 850 | 2.1196 | 0.76 | 0.7278 | 0.7207 | 0.7191 |
| 0.0039 | 18.0 | 900 | 2.4087 | 0.72 | 0.6791 | 0.6590 | 0.6650 |
| 0.0041 | 19.0 | 950 | 2.1487 | 0.73 | 0.6889 | 0.6860 | 0.6873 |
| 0.0024 | 20.0 | 1000 | 2.2694 | 0.745 | 0.7091 | 0.7017 | 0.7043 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-1
|
SkyR
| 2023-12-20T18:55:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T18:16:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-1
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3970
- Accuracy: 0.735
- Precision: 0.7033
- Recall: 0.6790
- F1: 0.6873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9719 | 1.0 | 200 | 0.8460 | 0.635 | 0.6534 | 0.5920 | 0.5547 |
| 0.7793 | 2.0 | 400 | 0.7762 | 0.675 | 0.6965 | 0.6323 | 0.5936 |
| 0.5734 | 3.0 | 600 | 0.8149 | 0.67 | 0.6200 | 0.6192 | 0.6196 |
| 0.3877 | 4.0 | 800 | 0.9555 | 0.7 | 0.6724 | 0.6482 | 0.6549 |
| 0.2426 | 5.0 | 1000 | 1.1248 | 0.695 | 0.6529 | 0.6437 | 0.6452 |
| 0.183 | 6.0 | 1200 | 1.3497 | 0.705 | 0.6717 | 0.6489 | 0.6563 |
| 0.1011 | 7.0 | 1400 | 1.6369 | 0.7 | 0.6620 | 0.6532 | 0.6560 |
| 0.0602 | 8.0 | 1600 | 1.8171 | 0.7 | 0.6763 | 0.6615 | 0.6654 |
| 0.0335 | 9.0 | 1800 | 1.9601 | 0.695 | 0.6640 | 0.6490 | 0.6545 |
| 0.0158 | 10.0 | 2000 | 2.0206 | 0.71 | 0.6802 | 0.6751 | 0.6768 |
| 0.0148 | 11.0 | 2200 | 2.0881 | 0.675 | 0.6252 | 0.6242 | 0.6232 |
| 0.0057 | 12.0 | 2400 | 2.2708 | 0.735 | 0.7146 | 0.6790 | 0.6904 |
| 0.0079 | 13.0 | 2600 | 2.2348 | 0.72 | 0.6917 | 0.6659 | 0.6746 |
| 0.0018 | 14.0 | 2800 | 2.2978 | 0.725 | 0.6968 | 0.6662 | 0.6761 |
| 0.0025 | 15.0 | 3000 | 2.3180 | 0.735 | 0.7067 | 0.6790 | 0.6883 |
| 0.0028 | 16.0 | 3200 | 2.3910 | 0.74 | 0.7153 | 0.6854 | 0.6953 |
| 0.0002 | 17.0 | 3400 | 2.3830 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0002 | 18.0 | 3600 | 2.3899 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0001 | 19.0 | 3800 | 2.3922 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
| 0.0001 | 20.0 | 4000 | 2.3970 | 0.735 | 0.7033 | 0.6790 | 0.6873 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-3
|
SkyR
| 2023-12-20T18:55:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T19:02:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4381
- Accuracy: 0.7
- Precision: 0.6579
- Recall: 0.6558
- F1: 0.6568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9876 | 1.0 | 200 | 0.9367 | 0.64 | 0.6707 | 0.5623 | 0.5425 |
| 0.7553 | 2.0 | 400 | 0.7936 | 0.66 | 0.6269 | 0.6298 | 0.6105 |
| 0.556 | 3.0 | 600 | 0.9257 | 0.71 | 0.6759 | 0.6504 | 0.6563 |
| 0.3871 | 4.0 | 800 | 0.9893 | 0.63 | 0.5882 | 0.5985 | 0.5876 |
| 0.2446 | 5.0 | 1000 | 1.1867 | 0.695 | 0.6582 | 0.6563 | 0.6566 |
| 0.1502 | 6.0 | 1200 | 1.6108 | 0.71 | 0.6708 | 0.6523 | 0.6585 |
| 0.1049 | 7.0 | 1400 | 2.4882 | 0.645 | 0.6030 | 0.5597 | 0.5649 |
| 0.0764 | 8.0 | 1600 | 2.0064 | 0.715 | 0.6798 | 0.6602 | 0.6651 |
| 0.032 | 9.0 | 1800 | 2.6447 | 0.655 | 0.5913 | 0.5774 | 0.5727 |
| 0.0177 | 10.0 | 2000 | 2.2460 | 0.675 | 0.6290 | 0.6287 | 0.6287 |
| 0.0153 | 11.0 | 2200 | 2.3537 | 0.69 | 0.6524 | 0.6407 | 0.6408 |
| 0.006 | 12.0 | 2400 | 2.4205 | 0.695 | 0.6582 | 0.6448 | 0.6486 |
| 0.0045 | 13.0 | 2600 | 2.3032 | 0.68 | 0.6394 | 0.6314 | 0.6287 |
| 0.0038 | 14.0 | 2800 | 2.3506 | 0.685 | 0.6388 | 0.6370 | 0.6367 |
| 0.0034 | 15.0 | 3000 | 2.3750 | 0.7 | 0.6590 | 0.6558 | 0.6573 |
| 0.0019 | 16.0 | 3200 | 2.4289 | 0.72 | 0.6819 | 0.6723 | 0.6763 |
| 0.0016 | 17.0 | 3400 | 2.4470 | 0.725 | 0.6892 | 0.6788 | 0.6830 |
| 0.0002 | 18.0 | 3600 | 2.4374 | 0.71 | 0.6700 | 0.6626 | 0.6657 |
| 0.0002 | 19.0 | 3800 | 2.4353 | 0.7 | 0.6579 | 0.6558 | 0.6568 |
| 0.0002 | 20.0 | 4000 | 2.4381 | 0.7 | 0.6579 | 0.6558 | 0.6568 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/albert-base-ours-run-4
|
SkyR
| 2023-12-20T18:55:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T19:25:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: albert-base-v2
model-index:
- name: albert-base-ours-run-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-ours-run-4
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9565
- Accuracy: 0.72
- Precision: 0.6790
- Recall: 0.6770
- F1: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0253 | 1.0 | 200 | 0.8974 | 0.605 | 0.7186 | 0.5341 | 0.4555 |
| 0.8121 | 2.0 | 400 | 0.8260 | 0.675 | 0.6792 | 0.6308 | 0.6112 |
| 0.6153 | 3.0 | 600 | 0.8504 | 0.66 | 0.6180 | 0.6026 | 0.6073 |
| 0.441 | 4.0 | 800 | 0.8917 | 0.685 | 0.6463 | 0.6385 | 0.6403 |
| 0.3273 | 5.0 | 1000 | 0.9384 | 0.69 | 0.6534 | 0.6602 | 0.6561 |
| 0.2138 | 6.0 | 1200 | 1.3501 | 0.705 | 0.6573 | 0.6374 | 0.6388 |
| 0.1435 | 7.0 | 1400 | 1.4614 | 0.71 | 0.6693 | 0.6553 | 0.6601 |
| 0.1202 | 8.0 | 1600 | 1.5825 | 0.7 | 0.6648 | 0.6592 | 0.6530 |
| 0.0587 | 9.0 | 1800 | 1.7755 | 0.72 | 0.6839 | 0.6849 | 0.6840 |
| 0.0237 | 10.0 | 2000 | 1.7240 | 0.735 | 0.6960 | 0.6924 | 0.6940 |
| 0.018 | 11.0 | 2200 | 1.7230 | 0.745 | 0.7105 | 0.7003 | 0.7026 |
| 0.0096 | 12.0 | 2400 | 1.7812 | 0.75 | 0.7225 | 0.7142 | 0.7158 |
| 0.006 | 13.0 | 2600 | 1.8223 | 0.75 | 0.7265 | 0.7082 | 0.7147 |
| 0.0033 | 14.0 | 2800 | 1.9872 | 0.76 | 0.7434 | 0.7107 | 0.7188 |
| 0.003 | 15.0 | 3000 | 1.8818 | 0.72 | 0.6778 | 0.6766 | 0.6765 |
| 0.0027 | 16.0 | 3200 | 1.9816 | 0.75 | 0.7125 | 0.6990 | 0.7043 |
| 0.002 | 17.0 | 3400 | 1.9268 | 0.725 | 0.6832 | 0.6834 | 0.6825 |
| 0.0023 | 18.0 | 3600 | 1.9456 | 0.73 | 0.6913 | 0.6898 | 0.6898 |
| 0.0025 | 19.0 | 3800 | 1.9543 | 0.72 | 0.6790 | 0.6770 | 0.6766 |
| 0.0016 | 20.0 | 4000 | 1.9565 | 0.72 | 0.6790 | 0.6770 | 0.6766 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/hing-mbert-ours-run-2
|
SkyR
| 2023-12-20T18:55:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/hing-mbert",
"base_model:finetune:l3cube-pune/hing-mbert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-13T19:00:29Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: l3cube-pune/hing-mbert
model-index:
- name: hing-mbert-ours-run-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-mbert-ours-run-2
This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3919
- Accuracy: 0.62
- Precision: 0.5759
- Recall: 0.5631
- F1: 0.5669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0284 | 1.0 | 100 | 1.2914 | 0.595 | 0.5712 | 0.4800 | 0.4642 |
| 0.8127 | 2.0 | 200 | 0.8552 | 0.59 | 0.5744 | 0.5675 | 0.4891 |
| 0.5499 | 3.0 | 300 | 1.1212 | 0.645 | 0.6544 | 0.5600 | 0.5475 |
| 0.3433 | 4.0 | 400 | 1.2017 | 0.605 | 0.5872 | 0.5866 | 0.5791 |
| 0.2218 | 5.0 | 500 | 1.8329 | 0.655 | 0.6458 | 0.6064 | 0.6055 |
| 0.1763 | 6.0 | 600 | 2.4194 | 0.655 | 0.6140 | 0.5802 | 0.5871 |
| 0.1022 | 7.0 | 700 | 2.3894 | 0.66 | 0.6171 | 0.6045 | 0.6048 |
| 0.0631 | 8.0 | 800 | 2.8259 | 0.605 | 0.5704 | 0.5255 | 0.5259 |
| 0.0254 | 9.0 | 900 | 2.9135 | 0.65 | 0.6013 | 0.5734 | 0.5784 |
| 0.0316 | 10.0 | 1000 | 3.0548 | 0.62 | 0.5862 | 0.5650 | 0.5670 |
| 0.026 | 11.0 | 1100 | 3.1020 | 0.62 | 0.5722 | 0.5593 | 0.5619 |
| 0.0152 | 12.0 | 1200 | 3.0692 | 0.62 | 0.5685 | 0.5597 | 0.5621 |
| 0.0156 | 13.0 | 1300 | 3.1068 | 0.615 | 0.5771 | 0.5589 | 0.5624 |
| 0.0237 | 14.0 | 1400 | 3.3487 | 0.62 | 0.5924 | 0.5589 | 0.5642 |
| 0.0094 | 15.0 | 1500 | 3.2007 | 0.615 | 0.5665 | 0.5639 | 0.5650 |
| 0.0054 | 16.0 | 1600 | 3.2838 | 0.62 | 0.5807 | 0.5657 | 0.5690 |
| 0.005 | 17.0 | 1700 | 3.2258 | 0.615 | 0.5846 | 0.5723 | 0.5747 |
| 0.005 | 18.0 | 1800 | 3.3572 | 0.63 | 0.5827 | 0.5698 | 0.5736 |
| 0.0022 | 19.0 | 1900 | 3.3642 | 0.62 | 0.5759 | 0.5631 | 0.5669 |
| 0.0019 | 20.0 | 2000 | 3.3919 | 0.62 | 0.5759 | 0.5631 | 0.5669 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/hing-mbert-ours-run-3
|
SkyR
| 2023-12-20T18:54:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/hing-mbert",
"base_model:finetune:l3cube-pune/hing-mbert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-13T19:30:40Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: l3cube-pune/hing-mbert
model-index:
- name: hing-mbert-ours-run-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-mbert-ours-run-3
This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9769
- Accuracy: 0.675
- Precision: 0.6433
- Recall: 0.6344
- F1: 0.6344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9089 | 1.0 | 100 | 1.0993 | 0.635 | 0.6487 | 0.5304 | 0.5060 |
| 0.6657 | 2.0 | 200 | 0.8138 | 0.645 | 0.6550 | 0.6482 | 0.6234 |
| 0.3858 | 3.0 | 300 | 1.1334 | 0.665 | 0.6162 | 0.6061 | 0.5995 |
| 0.208 | 4.0 | 400 | 1.9041 | 0.685 | 0.6488 | 0.6169 | 0.6087 |
| 0.0996 | 5.0 | 500 | 2.3735 | 0.645 | 0.5867 | 0.5781 | 0.5794 |
| 0.0296 | 6.0 | 600 | 2.5772 | 0.665 | 0.6284 | 0.6208 | 0.6198 |
| 0.0623 | 7.0 | 700 | 2.8906 | 0.655 | 0.6040 | 0.5916 | 0.5926 |
| 0.0395 | 8.0 | 800 | 2.6567 | 0.675 | 0.6279 | 0.6254 | 0.6219 |
| 0.029 | 9.0 | 900 | 2.9277 | 0.655 | 0.6208 | 0.5950 | 0.5991 |
| 0.0194 | 10.0 | 1000 | 2.7362 | 0.665 | 0.6241 | 0.6208 | 0.6190 |
| 0.0092 | 11.0 | 1100 | 2.5561 | 0.68 | 0.6396 | 0.6401 | 0.6385 |
| 0.0059 | 12.0 | 1200 | 3.1112 | 0.675 | 0.6350 | 0.5967 | 0.6042 |
| 0.0133 | 13.0 | 1300 | 2.5269 | 0.685 | 0.6520 | 0.6607 | 0.6519 |
| 0.0051 | 14.0 | 1400 | 2.8736 | 0.68 | 0.6381 | 0.6158 | 0.6134 |
| 0.0044 | 15.0 | 1500 | 2.9132 | 0.675 | 0.6336 | 0.6180 | 0.6200 |
| 0.0029 | 16.0 | 1600 | 2.8701 | 0.675 | 0.6337 | 0.6214 | 0.6233 |
| 0.0015 | 17.0 | 1700 | 2.8115 | 0.69 | 0.6475 | 0.6388 | 0.6420 |
| 0.0019 | 18.0 | 1800 | 2.9517 | 0.67 | 0.6373 | 0.6276 | 0.6273 |
| 0.0013 | 19.0 | 1900 | 2.9633 | 0.67 | 0.6373 | 0.6276 | 0.6273 |
| 0.0007 | 20.0 | 2000 | 2.9769 | 0.675 | 0.6433 | 0.6344 | 0.6344 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SkyR/hing-mbert-ours-run-4
|
SkyR
| 2023-12-20T18:54:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:l3cube-pune/hing-mbert",
"base_model:finetune:l3cube-pune/hing-mbert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-13T20:00:21Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: l3cube-pune/hing-mbert
model-index:
- name: hing-mbert-ours-run-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-mbert-ours-run-4
This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0173
- Accuracy: 0.68
- Precision: 0.6330
- Recall: 0.6325
- F1: 0.6320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9781 | 1.0 | 100 | 0.8852 | 0.55 | 0.4044 | 0.5284 | 0.4211 |
| 0.7568 | 2.0 | 200 | 0.8110 | 0.655 | 0.5994 | 0.6013 | 0.5762 |
| 0.5121 | 3.0 | 300 | 0.9735 | 0.65 | 0.6145 | 0.6131 | 0.5965 |
| 0.314 | 4.0 | 400 | 1.1324 | 0.65 | 0.6305 | 0.6355 | 0.6266 |
| 0.1298 | 5.0 | 500 | 2.8247 | 0.61 | 0.5804 | 0.5087 | 0.5092 |
| 0.1013 | 6.0 | 600 | 2.8183 | 0.635 | 0.6212 | 0.5674 | 0.5667 |
| 0.0989 | 7.0 | 700 | 2.3235 | 0.635 | 0.5944 | 0.5922 | 0.5916 |
| 0.0481 | 8.0 | 800 | 2.5240 | 0.68 | 0.6334 | 0.6172 | 0.6221 |
| 0.018 | 9.0 | 900 | 2.6782 | 0.65 | 0.6123 | 0.6054 | 0.6062 |
| 0.0285 | 10.0 | 1000 | 2.3400 | 0.67 | 0.6206 | 0.6327 | 0.6189 |
| 0.014 | 11.0 | 1100 | 2.6558 | 0.66 | 0.6098 | 0.5992 | 0.6018 |
| 0.0085 | 12.0 | 1200 | 2.9366 | 0.66 | 0.6076 | 0.5961 | 0.5991 |
| 0.0106 | 13.0 | 1300 | 2.8567 | 0.665 | 0.6198 | 0.6193 | 0.6186 |
| 0.0097 | 14.0 | 1400 | 3.1526 | 0.64 | 0.6089 | 0.5975 | 0.5954 |
| 0.0022 | 15.0 | 1500 | 2.7305 | 0.69 | 0.6404 | 0.6404 | 0.6398 |
| 0.0016 | 16.0 | 1600 | 2.7670 | 0.69 | 0.6418 | 0.6434 | 0.6425 |
| 0.0017 | 17.0 | 1700 | 2.8193 | 0.7 | 0.6533 | 0.6566 | 0.6546 |
| 0.0009 | 18.0 | 1800 | 2.9959 | 0.685 | 0.6400 | 0.6389 | 0.6384 |
| 0.0006 | 19.0 | 1900 | 3.0153 | 0.68 | 0.6330 | 0.6325 | 0.6320 |
| 0.0005 | 20.0 | 2000 | 3.0173 | 0.68 | 0.6330 | 0.6325 | 0.6320 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
SetFit/deberta-v3-large__sst2__train-16-9
|
SetFit
| 2023-12-20T18:42:21Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-large
model-index:
- name: deberta-v3-large__sst2__train-16-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2598
- Accuracy: 0.7809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6887 | 1.0 | 7 | 0.7452 | 0.2857 |
| 0.6889 | 2.0 | 14 | 0.7988 | 0.2857 |
| 0.6501 | 3.0 | 21 | 0.8987 | 0.2857 |
| 0.4286 | 4.0 | 28 | 0.9186 | 0.4286 |
| 0.3591 | 5.0 | 35 | 0.5566 | 0.7143 |
| 0.0339 | 6.0 | 42 | 1.1130 | 0.5714 |
| 0.013 | 7.0 | 49 | 1.8296 | 0.7143 |
| 0.0041 | 8.0 | 56 | 1.7069 | 0.7143 |
| 0.0023 | 9.0 | 63 | 1.1942 | 0.7143 |
| 0.0022 | 10.0 | 70 | 0.6054 | 0.7143 |
| 0.0011 | 11.0 | 77 | 0.3872 | 0.7143 |
| 0.0006 | 12.0 | 84 | 0.3217 | 0.7143 |
| 0.0005 | 13.0 | 91 | 0.2879 | 0.8571 |
| 0.0005 | 14.0 | 98 | 0.2640 | 0.8571 |
| 0.0004 | 15.0 | 105 | 0.2531 | 0.8571 |
| 0.0003 | 16.0 | 112 | 0.2384 | 0.8571 |
| 0.0004 | 17.0 | 119 | 0.2338 | 0.8571 |
| 0.0003 | 18.0 | 126 | 0.2314 | 0.8571 |
| 0.0003 | 19.0 | 133 | 0.2276 | 0.8571 |
| 0.0003 | 20.0 | 140 | 0.2172 | 0.8571 |
| 0.0003 | 21.0 | 147 | 0.2069 | 0.8571 |
| 0.0002 | 22.0 | 154 | 0.2018 | 0.8571 |
| 0.0002 | 23.0 | 161 | 0.2005 | 0.8571 |
| 0.0002 | 24.0 | 168 | 0.1985 | 0.8571 |
| 0.0002 | 25.0 | 175 | 0.1985 | 1.0 |
| 0.0002 | 26.0 | 182 | 0.1955 | 1.0 |
| 0.0002 | 27.0 | 189 | 0.1967 | 1.0 |
| 0.0002 | 28.0 | 196 | 0.1918 | 1.0 |
| 0.0002 | 29.0 | 203 | 0.1888 | 1.0 |
| 0.0002 | 30.0 | 210 | 0.1864 | 1.0 |
| 0.0002 | 31.0 | 217 | 0.1870 | 1.0 |
| 0.0002 | 32.0 | 224 | 0.1892 | 1.0 |
| 0.0002 | 33.0 | 231 | 0.1917 | 1.0 |
| 0.0002 | 34.0 | 238 | 0.1869 | 1.0 |
| 0.0002 | 35.0 | 245 | 0.1812 | 1.0 |
| 0.0001 | 36.0 | 252 | 0.1777 | 1.0 |
| 0.0002 | 37.0 | 259 | 0.1798 | 1.0 |
| 0.0002 | 38.0 | 266 | 0.1824 | 0.8571 |
| 0.0002 | 39.0 | 273 | 0.1846 | 0.8571 |
| 0.0002 | 40.0 | 280 | 0.1839 | 0.8571 |
| 0.0001 | 41.0 | 287 | 0.1826 | 0.8571 |
| 0.0001 | 42.0 | 294 | 0.1779 | 0.8571 |
| 0.0002 | 43.0 | 301 | 0.1762 | 0.8571 |
| 0.0001 | 44.0 | 308 | 0.1742 | 1.0 |
| 0.0002 | 45.0 | 315 | 0.1708 | 1.0 |
| 0.0001 | 46.0 | 322 | 0.1702 | 1.0 |
| 0.0001 | 47.0 | 329 | 0.1699 | 1.0 |
| 0.0001 | 48.0 | 336 | 0.1695 | 1.0 |
| 0.0001 | 49.0 | 343 | 0.1683 | 1.0 |
| 0.0001 | 50.0 | 350 | 0.1681 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-4
|
SetFit
| 2023-12-20T18:42:15Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-large
model-index:
- name: deberta-v3-large__sst2__train-16-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-4
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6329
- Accuracy: 0.6392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6945 | 1.0 | 7 | 0.7381 | 0.2857 |
| 0.7072 | 2.0 | 14 | 0.7465 | 0.2857 |
| 0.6548 | 3.0 | 21 | 0.7277 | 0.4286 |
| 0.5695 | 4.0 | 28 | 0.6738 | 0.5714 |
| 0.4615 | 5.0 | 35 | 0.8559 | 0.5714 |
| 0.0823 | 6.0 | 42 | 1.0983 | 0.5714 |
| 0.0274 | 7.0 | 49 | 1.9937 | 0.5714 |
| 0.0106 | 8.0 | 56 | 2.2209 | 0.5714 |
| 0.0039 | 9.0 | 63 | 2.2114 | 0.5714 |
| 0.0031 | 10.0 | 70 | 2.2808 | 0.5714 |
| 0.0013 | 11.0 | 77 | 2.3707 | 0.5714 |
| 0.0008 | 12.0 | 84 | 2.4902 | 0.5714 |
| 0.0005 | 13.0 | 91 | 2.5208 | 0.5714 |
| 0.0007 | 14.0 | 98 | 2.5683 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-2
|
SetFit
| 2023-12-20T18:42:11Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-large
model-index:
- name: deberta-v3-large__sst2__train-16-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6959
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7079 | 1.0 | 7 | 0.7361 | 0.2857 |
| 0.6815 | 2.0 | 14 | 0.7659 | 0.2857 |
| 0.6938 | 3.0 | 21 | 0.7944 | 0.2857 |
| 0.4584 | 4.0 | 28 | 1.2441 | 0.2857 |
| 0.4949 | 5.0 | 35 | 1.2285 | 0.5714 |
| 0.0574 | 6.0 | 42 | 1.7796 | 0.5714 |
| 0.0156 | 7.0 | 49 | 2.6027 | 0.5714 |
| 0.0051 | 8.0 | 56 | 2.8717 | 0.5714 |
| 0.0017 | 9.0 | 63 | 2.8491 | 0.5714 |
| 0.0023 | 10.0 | 70 | 1.7149 | 0.7143 |
| 0.001 | 11.0 | 77 | 1.1101 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-1
|
SetFit
| 2023-12-20T18:42:08Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-large
model-index:
- name: deberta-v3-large__sst2__train-16-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6804
- Accuracy: 0.5497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7086 | 1.0 | 7 | 0.7176 | 0.2857 |
| 0.6897 | 2.0 | 14 | 0.7057 | 0.2857 |
| 0.6491 | 3.0 | 21 | 0.6582 | 0.8571 |
| 0.567 | 4.0 | 28 | 0.4480 | 0.8571 |
| 0.4304 | 5.0 | 35 | 0.5465 | 0.7143 |
| 0.0684 | 6.0 | 42 | 0.5408 | 0.8571 |
| 0.0339 | 7.0 | 49 | 0.6501 | 0.8571 |
| 0.0082 | 8.0 | 56 | 0.9152 | 0.8571 |
| 0.0067 | 9.0 | 63 | 2.5162 | 0.5714 |
| 0.0045 | 10.0 | 70 | 1.1136 | 0.8571 |
| 0.0012 | 11.0 | 77 | 1.1668 | 0.8571 |
| 0.0007 | 12.0 | 84 | 1.2071 | 0.8571 |
| 0.0005 | 13.0 | 91 | 1.2310 | 0.8571 |
| 0.0006 | 14.0 | 98 | 1.2476 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__all-train
|
SetFit
| 2023-12-20T18:42:01Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased__sst2__all-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__all-train
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2496
- Accuracy: 0.8962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3643 | 1.0 | 433 | 0.2496 | 0.8962 |
| 0.196 | 2.0 | 866 | 0.2548 | 0.9110 |
| 0.0915 | 3.0 | 1299 | 0.4483 | 0.8957 |
| 0.0505 | 4.0 | 1732 | 0.4968 | 0.9044 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-16-7
|
SetFit
| 2023-12-20T18:41:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: microsoft/deberta-v3-large
model-index:
- name: deberta-v3-large__sst2__train-16-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-7
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.5063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6911 | 1.0 | 7 | 0.7455 | 0.2857 |
| 0.6844 | 2.0 | 14 | 0.7242 | 0.2857 |
| 0.6137 | 3.0 | 21 | 0.7341 | 0.4286 |
| 0.3805 | 4.0 | 28 | 1.0217 | 0.4286 |
| 0.2201 | 5.0 | 35 | 1.1437 | 0.2857 |
| 0.0296 | 6.0 | 42 | 1.5997 | 0.4286 |
| 0.0103 | 7.0 | 49 | 2.6835 | 0.4286 |
| 0.0046 | 8.0 | 56 | 3.3521 | 0.4286 |
| 0.002 | 9.0 | 63 | 3.7846 | 0.4286 |
| 0.0017 | 10.0 | 70 | 4.0088 | 0.4286 |
| 0.0018 | 11.0 | 77 | 4.1483 | 0.4286 |
| 0.0006 | 12.0 | 84 | 4.2235 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
abdulmatinomotoso/distilroberta-topic-classification_5
|
abdulmatinomotoso
| 2023-12-20T18:33:12Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T11:00:30Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilroberta-topic-classification_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-topic-classification_5
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7686
- F1: 0.6337
## Model description
More information needed
## Intended uses & limitations
More information needed
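In the meantime, a minimal inference sketch with the lower-level API; the topic label set is not documented in this card, so the code simply prints whatever `id2label` mapping the checkpoint ships with:
```python
# Sketch: score a headline against whatever topic labels the checkpoint defines.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "abdulmatinomotoso/distilroberta-topic-classification_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Stocks rallied after the central bank held rates steady.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

top = probs.argmax().item()
print(model.config.id2label[top], round(probs[top].item(), 3))
```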
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.6113 | 1.0 | 1305 | 2.6631 | 0.5832 |
| 2.4032 | 2.0 | 2610 | 2.6335 | 0.5943 |
| 2.3245 | 3.0 | 3915 | 2.6132 | 0.6196 |
| 2.2142 | 4.0 | 5220 | 2.6438 | 0.6226 |
| 2.0364 | 5.0 | 6525 | 2.6559 | 0.6323 |
| 2.03 | 6.0 | 7830 | 2.7057 | 0.6282 |
| 1.9461 | 7.0 | 9135 | 2.7222 | 0.6325 |
| 1.8751 | 8.0 | 10440 | 2.7435 | 0.6302 |
| 1.8463 | 9.0 | 11745 | 2.7668 | 0.6329 |
| 1.9001 | 10.0 | 13050 | 2.7686 | 0.6337 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
chathuru/cicids2018-distilbert
|
chathuru
| 2023-12-20T18:27:14Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-02T13:38:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: cicids2018-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cicids2018-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3903
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 6 | 0.4998 | 0.9167 | 0.9231 |
| No log | 2.0 | 12 | 0.3903 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ryli/distilbert-base-uncased-finetuned-emotion
|
ryli
| 2023-12-20T18:23:13Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T17:00:22Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.936
- name: F1
type: f1
value: 0.9358567652903533
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1542
- Accuracy: 0.936
- F1: 0.9359
## Model description
More information needed
## Intended uses & limitations
More information needed
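In the meantime, a minimal sketch for scoring all emotion classes, assuming the checkpoint keeps the six labels of the `emotion` dataset (sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ryli/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every label instead of only the top one
)

preds = classifier(["I can't wait to see you this weekend!"])[0]
for pred in preds:
    print(f"{pred['label']}: {pred['score']:.3f}")
```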
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1707 | 1.0 | 250 | 0.1707 | 0.9335 | 0.9340 |
| 0.1126 | 2.0 | 500 | 0.1542 | 0.936 | 0.9359 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
olesya2096/llama2-7b_results
|
olesya2096
| 2023-12-20T18:22:58Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-20T18:22:31Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: llama2-7b_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b_results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
juosilva/llama2-qlora-finetunined-french
|
juosilva
| 2023-12-20T18:18:42Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-20T18:18:35Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
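In the absence of author-provided code, a minimal sketch for loading this LoRA adapter on top of the base model listed above is shown below (the precision, device placement, and prompt format are assumptions, not documented settings):
```python
# Sketch: attach the adapter to its base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "juosilva/llama2-qlora-finetunined-french"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Bonjour, comment", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```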
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
hkivancoral/smids_10x_deit_base_adamax_0001_fold5
|
hkivancoral
| 2023-12-20T18:12:16Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-20T15:37:37Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_base_adamax_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8747
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
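In the meantime, a minimal inference sketch; the class names come from the image-folder structure used for training, which is not documented in this card, and the file name below is only a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_base_adamax_0001_fold5",
)

# Any PIL image or local/remote path works here; the file name is just an example.
print(classifier("example_slide_patch.png", top_k=3))
```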
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1226 | 1.0 | 750 | 0.2826 | 0.89 |
| 0.1 | 2.0 | 1500 | 0.3891 | 0.9017 |
| 0.0601 | 3.0 | 2250 | 0.4750 | 0.9133 |
| 0.0263 | 4.0 | 3000 | 0.4903 | 0.9117 |
| 0.0141 | 5.0 | 3750 | 0.6938 | 0.905 |
| 0.0263 | 6.0 | 4500 | 0.7445 | 0.895 |
| 0.0152 | 7.0 | 5250 | 0.5895 | 0.92 |
| 0.0017 | 8.0 | 6000 | 0.6688 | 0.92 |
| 0.0043 | 9.0 | 6750 | 0.6499 | 0.9167 |
| 0.0 | 10.0 | 7500 | 0.7976 | 0.91 |
| 0.011 | 11.0 | 8250 | 0.6899 | 0.9183 |
| 0.0 | 12.0 | 9000 | 0.6947 | 0.905 |
| 0.0 | 13.0 | 9750 | 0.8213 | 0.9067 |
| 0.0 | 14.0 | 10500 | 0.6853 | 0.9217 |
| 0.0 | 15.0 | 11250 | 0.8220 | 0.9117 |
| 0.0 | 16.0 | 12000 | 0.6897 | 0.9217 |
| 0.0038 | 17.0 | 12750 | 0.7869 | 0.9167 |
| 0.0004 | 18.0 | 13500 | 0.7956 | 0.9233 |
| 0.0031 | 19.0 | 14250 | 0.8745 | 0.9067 |
| 0.0 | 20.0 | 15000 | 0.7439 | 0.92 |
| 0.0 | 21.0 | 15750 | 0.7389 | 0.9233 |
| 0.0 | 22.0 | 16500 | 0.7135 | 0.925 |
| 0.0028 | 23.0 | 17250 | 0.7301 | 0.925 |
| 0.0 | 24.0 | 18000 | 0.7434 | 0.9283 |
| 0.0 | 25.0 | 18750 | 0.7538 | 0.9233 |
| 0.0 | 26.0 | 19500 | 0.7620 | 0.9233 |
| 0.0 | 27.0 | 20250 | 0.7318 | 0.9233 |
| 0.0 | 28.0 | 21000 | 0.7471 | 0.92 |
| 0.0 | 29.0 | 21750 | 0.7685 | 0.9183 |
| 0.0046 | 30.0 | 22500 | 0.8050 | 0.92 |
| 0.0 | 31.0 | 23250 | 0.7778 | 0.92 |
| 0.0 | 32.0 | 24000 | 0.7888 | 0.9217 |
| 0.0 | 33.0 | 24750 | 0.7725 | 0.9233 |
| 0.0 | 34.0 | 25500 | 0.8185 | 0.9167 |
| 0.0 | 35.0 | 26250 | 0.8187 | 0.92 |
| 0.0 | 36.0 | 27000 | 0.8276 | 0.92 |
| 0.0031 | 37.0 | 27750 | 0.8218 | 0.9217 |
| 0.0 | 38.0 | 28500 | 0.8408 | 0.92 |
| 0.0 | 39.0 | 29250 | 0.8462 | 0.9183 |
| 0.0 | 40.0 | 30000 | 0.8525 | 0.92 |
| 0.0 | 41.0 | 30750 | 0.8553 | 0.92 |
| 0.0 | 42.0 | 31500 | 0.8584 | 0.92 |
| 0.0 | 43.0 | 32250 | 0.8634 | 0.9183 |
| 0.0 | 44.0 | 33000 | 0.8639 | 0.92 |
| 0.0 | 45.0 | 33750 | 0.8671 | 0.92 |
| 0.0 | 46.0 | 34500 | 0.8704 | 0.92 |
| 0.0 | 47.0 | 35250 | 0.8722 | 0.92 |
| 0.0 | 48.0 | 36000 | 0.8729 | 0.92 |
| 0.0 | 49.0 | 36750 | 0.8728 | 0.92 |
| 0.0 | 50.0 | 37500 | 0.8747 | 0.92 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
augmxnt/shisa-7b-v1
|
augmxnt
| 2023-12-20T18:11:13Z | 1,526 | 29 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ja",
"en",
"dataset:augmxnt/ultra-orca-boros-en-ja-v1",
"dataset:Open-Orca/SlimOrca",
"dataset:augmxnt/shisa-en-ja-dpo-v1",
"arxiv:2310.05914",
"arxiv:2305.18290",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-27T17:55:31Z |
---
license: apache-2.0
language:
- ja
- en
datasets:
- augmxnt/ultra-orca-boros-en-ja-v1
- Open-Orca/SlimOrca
- augmxnt/shisa-en-ja-dpo-v1
---
# Shisa 7B

**Shisa 7B** (`shisa-7b-v1`) is a bilingual Japanese and English (JA/EN) general-purpose chat model that aims to achieve strong Japanese language performance while retaining robust English capabilities, using a synthetic-data driven approach.
This model is based on [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) with a custom JA-optimized extended tokenizer that is >2X more efficient in Japanese than Mistral's original tokenizer. The base model was pre-trained on an additional 8B primarily Japanese tokens. It was subsequently fine-tuned with an expanded, machine-translated version of [airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1), a set of the highest-scoring items from [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), and additional freshly generated [airoboros](https://github.com/jondurbin/airoboros) data created directly in the target languages.
We also release our base model, datasets, and pipeline code under a permissive Apache 2.0 license which can be used for any purpose, commercial or otherwise:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - our base model w/ an extended tokenizer and additional JA pre-training
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - our pre-training data set
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) - a synthetically generated, machine-translated, programmatically validated JA/EN fine-tuning dataset
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - a small subset of DPO pairs from ultrafeedback, along with JA DPO pairs that use GPT-4 generated items as the chosen values and outputs from our preliminary 7B model as the rejected values
* [Shisa repository](https://github.com/AUGMXNT/shisa) - this includes our translation, dataset generation, training, and evaluation code
Moreover, we are in the process of publishing extended writeups and more details of our process, including ablation results, testing methodology, and key findings [on our project wiki](https://github.com/AUGMXNT/shisa/wiki) that may be of interest to fellow researchers.
## Fine-Tuning
Our original intuition was to see if we could create a stronger Japanese model by incorporating the best [existing public JA training sets](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets). After initial review and testing, however, we decided that focusing solely on translation/generation of our own synthetic datasets could yield superior results with less training.
We compared multiple translation tools and, via manual review, judged that while `gpt-4` almost always delivered the highest quality translations, Google's `text-bison-32k` was a good balance of quality, cost and throughput. Over various iterations, we refined our translation approach to include some additional algorithms for flagging and filtering invalid translations, re-translating and backfilling as necessary.
We also took this project as an opportunity to apply some newer techniques such as incorporating [NEFTune](https://arxiv.org/abs/2310.05914) and [DPO](https://arxiv.org/abs/2305.18290) training.
For our v1 release, we picked from our release candidates based on a significant amount of human preference testing (thousands of generations and multiple rounds of pairwise comparisons). We analyzed our results with both win/loss/draw counts and [BTL modeling](https://datascience.oneoffcoder.com/btl-model.html) (iLSR) using [choix](https://github.com/lucasmaystre/choix).
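For illustration, a small sketch of this kind of pairwise-preference fit is shown below; the comparison data is made up, and `choix.ilsr_pairwise` simply expects a list of `(winner_index, loser_index)` pairs:
```python
# Illustrative only: fit Bradley-Terry-Luce strengths from pairwise wins with choix (iLSR).
import numpy as np
import choix

models = ["candidate-a", "candidate-b", "candidate-c"]
# Each tuple is (winner_index, loser_index) from one human pairwise comparison (made-up data).
comparisons = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1)]

params = choix.ilsr_pairwise(len(models), comparisons, alpha=0.01)
for idx in np.argsort(-params):
    print(f"{models[idx]}: strength={params[idx]:.3f}")
```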
The best candidate model was fine-tuned in a 3-step process:
1. First, the model was fine-tuned on `ultra-orca-boros-en-ja` and SlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))
2. Next, one additional epoch was performed using only a subset of the Japanese ultra-orca-boros-en-ja items to enhance JA performance (since the SlimOrca data from the first step is mostly EN) ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))
3. Finally, the model was tuned using a DPOTrainer on a small subset of ultrafeedback (EN) and our own JA DPO dataset, which uses gpt-4 outputs as the chosen values and outputs from stage 1's preliminary model as the rejected values ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))
During our training process, we also gained some key insights on [why some existing Japanese models seem to underperform](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) even versus models that have no additional JA training, and we hope that sharing this analysis will be useful to other teams developing Japanese language models.
While we need to explore this further, as an experimental validation we applied a version of our fine-tuning set to an existing base model ("Gamma 7B"), and the initial JA MT-Bench results suggest that we can drastically increase functional performance with our tuning approach:
| Model | Score |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
## Performance
Throughout our training, we did extensive human evaluation for each model to cross-validate our model performance, and we are currently conducting ongoing larger-scale manual head-to-head testing between models. Our intention is to open up and scale this data collection as we further develop our tools. For more information and updates, please see our [project wiki](https://github.com/AUGMXNT/shisa/wiki).
We believe [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) is a useful metric for our [base model](https://huggingface.co/augmxnt/shisa-base-7b-v1), and it was extremely useful for initial validation during our tuning process. However, because our fine-tune training includes a percentage of the benchmark train splits, we provide these llm-jp-eval results primarily as a point of interest:
| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|
*(We run a [slightly modified llm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) to support testing of Qwen and to emit a `bos_token` if available)*
For our final model, since it's customary to include benchmarks, we've used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a more representative test of our model's capabilities. For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese system prompt ("あなたは役立つアシスタントです。") as well as `--num-choices 4` in an effort to reduce sampling variability. However, we've still observed regular 0.5+ point (and sometimes even greater) swings between generations, as well as issues with default prompts and parameters when testing, so we'd urge caution in over-interpreting these scores and suggest treating them as a probabilistic directional indicator rather than a definitive score or ranking:
| Benchmark | Score |
| ----------- | ----- |
| JA MT-Bench | 5.23 |
| MT-Bench | 5.71 |
There is an [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), but as JA MT-Bench is still under development, for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`):
| Model | Score |
| ------------------------------------------------- | ---- |
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-14B-Chat | 7.47 |
| **shisa-7b-v1** | **5.23** |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
*(Marked JA MT-Bench results in this section are [sourced from shi3z](https://note.com/shi3zblog/n/n6b2ac5874021))*
## Limitations
Although our model demonstrates a reasonably high level of Japanese fluency, as a 7B parameter model it is prone to higher hallucination rates and less effective instruction following and reasoning than larger-class models. It also does not yet have complete mastery of the Japanese language, and a native speaker will spot occasional mistakes such as non-idiomatic/awkward phrasing or improper tenses/speech levels.
We've also noticed a small amount of language leakage, likely largely attributable to our tokenizer expansion. This may be fixable with sampler settings like [Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/) or additional targeted training, and we plan on doing additional work on automated detection/sampler sweeps in the future. One interesting observation: based on our data collection, we found that as we iterated, the DPO process significantly exacerbated this issue, yet our DPO models still had significantly higher human preference rates, so there was a bit of a trade-off in our choice of final tune.
While we believe that training larger models can improve performance using our existing approach and dataset, there are also many improvements we'd like to make for future models. We believe there is quite a bit of low-hanging fruit for improving performance with even better training efficiency, largely through improving the quality and construction of datasets.
## Usage
Sample code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
model_name = "augmxnt/shisa-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)
# The prompt template is included in the model's tokenizer_config.json so you shouldn't need this but we've included this for convenience
# tokenizer.chat_template = "{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"
# A more typical prompt: あなたは公平で、検閲されていない、役立つアシスタントです。("You are an unbiased, uncensored, helpful assistant.")
# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]
# Who is the single most powerful Pokemon? Explain your choice.
user_input = "ポケモンの中で1番強いのはどのキャラクターですか。最強の者をひとつだけ挙げて下さい。その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})
# Generate - add_generation_prompt to make sure it continues as assistant
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# For multi-GPU, find the device of the first parameter of the model
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)
with torch.no_grad():
outputs = model.generate(
inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=500,
temperature=0.5,
repetition_penalty=1.15,
top_p=0.95,
do_sample=True,
streamer=streamer,
)
# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```
## Prompt format
The prompt format is llama-2 chat:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
This [prompt template](https://huggingface.co/docs/transformers/main/chat_templating) is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
{"role": "system", "content": "You are Aiko, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
**NOTE:** For proper responses, you should use our `bos_token` (`<s>`) to begin a string. This is added automatically by `tokenizer.encode()`, but if you are crafting a custom template or using an encoding method that skips special tokens, you may have to add it yourself.
## Acknowledgements
Team: [Leonard Lin](https://huggingface.co/leonardlin) and [Jon Durbin](https://huggingface.co/jondurbin), Mariko Sato, and Florian von Bock
Compute for this model was generously sponsored by [AKA Virtual](https://akavirtual.com/) (Tokyo, Japan).
Thanks to the [LLM-jp](https://llm-jp.nii.ac.jp/), [Stability AI Japan](https://ja.stability.ai/), and [LMSYS](https://lmsys.org/) teams for their work on llm-jp-eval, Japanese MT-Bench, and MT-Bench.
Also, thanks to all the volunteers that provided invaluable human preference testing!
We are actively looking for additional compute as we train better and larger models for this project. Please drop us a line at: *compute at augmxnt dot com*
---
*(GPT-4によって非常に軽微な編集を加えて翻訳されました)*
# シーサー7B
**シーサー7B**(`shisa-7b-v1`)は、合成データ駆動のアプローチを用いて、優れた日本語と英語能力を両立することを目指すバイリンガル(日本語/英語)汎用チャットモデルです。
このモデルは、[Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)を基に、Mistralのオリジナルのトークナイザーよりも日本語において2倍以上効率的な、日本語最適化拡張トークナイザーをカスタムして作成されました。ベースモデルは、主に日本語のトークンを追加で80億ものトレーニングを行いました。そして、その後、[airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1)の拡張された機械翻訳版、[ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)からの最高得点項目のセット、そして新たに生成された[airoboros](https://github.com/jondurbin/airoboros)のデータを直接目標言語で微調整しています。
商用を含むあらゆる目的で使用可能な寛容なApache 2.0ライセンスの下で、ベースモデル、データセット、およびパイプラインコードも公開しています:
* [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - 拡張トークナイザーと追加の日本語プレトレーニングを備えた当方のベースモデル
* [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - 当方のプレトレーニングデータセット
* [ultra-orca-boros-en-ja](https://huggingface.co/datasets/jondurbin/ultra-orca-boros-en-ja) - 合成生成、機械翻訳、プログラムによる検証によるJA/EN微調整データセット
* [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - ultrafeedbackからのDPOペアの小さなサブセットと、選択された値としてGPT-4生成項目を使用した日本語のDPOペア、そして初期の7ビリオンモデルの出力を却下した値
* [シーサーリポジトリ](https://github.com/AUGMXNT/shisa) - 翻訳、データセットの生成、トレーニング、評価コードなどが含まれています
さらに、アブレーション結果、テスト方法論、主要な調査結果など、プロセスの詳細や拡張ライトアップを公開する過程にあります。これは[当プロジェクトwiki](https://github.com/AUGMXNT/shisa/wiki)で研究者に興味深い情報として提供されています。
## 微調整
最初の直感は、最良の[既存の公開日本語トレーニングセット](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets)を使用して、それらを組み入れることでより強力な日本語モデルを作成できるかどうかを見ることでした。しかし、初期の検討とテストの後、自らの合成データセットの翻訳/生成にだけ焦点を当てることで、短期間のトレーニングで優れた結果を得ることができると結論付けました。
私たちは複数の翻訳ツールを比較し、手動でレビューを行った結果、`gpt-4`がほぼ常に最高品質の翻訳を提供しながら、Googleの `text-bison-32k`は品質、コスト、スループットのバランスが良いと判断しました。複数の繰り返しを経て、無効な翻訳のフラグ付けとフィルタリング、必要に応じた再翻訳とバックフィルのための追加のアルゴリズムを含むように、翻訳アプローチを洗練させました。
また、このプロジェクトを[NEFTune](https://arxiv.org/abs/2310.05914)と[DPO](https://arxiv.org/abs/2305.18290)トレーニングを取り入れるなど、新しい技術を適用する機会ともなりました。
v1リリースのために、私たちは大量の人間の嗜好テスト(数千の生成と複数ラウンドのペアワイズ比較)に基づいてリリース候補から選択しました。私たちは、勝ち/負け/引き分けと、[BTLモデル](https://datascience.oneoffcoder.com/btl-model.html)(iLSR)を使用して[choix](https://github.com/lucasmaystre/choix)で結果を分析しました。
最良の候補モデルは、3ステップのプロセスで微調整されました:
1. 最初に、モデルは`ultra-orca-boros-en-ja`とSlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))で微調整されました。
2. 次に、日本語のパフォーマンスを向上させるためにultra-orca-boros-en-jaの一部を使用して1回追加のエポックを追加しました(最初の段階のSlimOrcaは主に英語)([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))。
3. 最後に、モデルは小規模のultrafeedback(英語)と自身のJA DPOデータセットに対してDPOTrainerを使用して調整されました。ここで使用したJA DPOデータセットはgpt-4の出力を選出された値とし、ステージ1の予備モデルの出力を却下した値とします。([WandDB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1) )
私たちのトレーニングプロセス中に、何故一部の既存の日本語モデルが、追加の日本語トレーニングがないモデルに対してもパフォーマンスが低いのか、といういくつかの重要な洞察を得ることができました。この分析結果を共有すれば、他のチームが日本語モデルを開発する際の参考になると思います。
さらに探求する必要はありますが、実験的な検証として、微調整セットのバージョンを既存のベースモデル("Gamma 7B")に適用し、初期のJA MT-Bench結果が示すように、私たちのチューニングアプローチで機能性のパフォーマンスを劇的に向上させることができました:
| モデル | スコア |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
## パフォーマンス
トレーニング全体を通じて、各モデルについて人間による評価を行い、モデルのパフォーマンスを相互に検証しました。現在、モデル間の手動での比較テストを大規模に行っています。私たちの目指すところは、ツールをさらに発展させることでこのデータ収集を公開して拡張することです。詳細と更新情報については、[プロジェクトwiki](https://github.com/AUGMXNT/shisa/wiki) をご覧ください。
我々は、[llm-jp-eval](https://github.com/llm-jp/llm-jp-eval)は、私たちの[基本モデル](https://huggingface.co/augmxnt/shisa-base-7b-v1)の有用な指標であり、初期の検証のための微調整プロセス中に非常に役立つと考えていますが、微調整トレーニングにはベンチマークのトレイン分割の一部が含まれているため、私たちが提供するllm-jp-evalの結果は主に興味深いポイントとして提供しています:
| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|
*(Qwenのテストをサポートし、可能であれば`bos_token`を発行するために、[わずかに修正したllm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) を実行しています)*
最終モデルについては、ベンチマークを含めるのが一般的なため、私たちのモデルの能力をより代表的にテストするために、Stability AI Japanの[Japanese MT-Bench](https://github.com/Stability-AI/FastChat)を使用しました。[私たちのJA MT-Bench テスト](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable)では、サンプリング変動を減らすために、日本語のプロンプト("あなたは役立つアシスタントです。")と `--num-choices 4`を使用していますが、生成間で0.5+点(時にはそれ以上の変動)を頻繁に観察し、テスト時のデフォルトのプロンプトとパラメータに問題があったという経験から、これらのスコアを過度に解釈することには注意が必要で、これらを確定的なスコアやランキングではなく、より確率的な方向指標として扱うことをお勧めします:
| ベンチマーク | スコア |
| ----------- | ----- |
| JA MT-Bench | 5.23 |
| MT-Bench | 5.71 |
[MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)がありますが、JA MT-Benchはまだ開発中であるため、便宜上、他のモデルのJA MT-Benchスコアとの比較を示します(私たちのスコアは`gpt-4-0613`によって評価されました):
| モデル | スコア |
| ------------------------------------------------- | ---- |
| gpt-4-0613 | 9.40 |
| gpt-4-1106-preview | 9.17 |
| gpt-3.5-turbo* | 8.41 |
| Qwen-14B-Chat | 7.47 |
| **shisa-7b-v1** | **5.23** |
| ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |
| japanese-stablelm-instruct-alpha-7b* | 2.74 |
| Mistral-7B-OpenOrca-ja* | 2.23 |
| youri-7b-chat* | 2.00 |
| Mistral-7B-Instruct-v0.1* | 1.78 |
| llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 |
*(このセクションでマークされたJA MT-Benchの結果は[shi3zから引用](https://note.com/shi3zblog/n/n6b2ac5874021)しました)*
## 制限事項
当モデルは十分な日本語の流暢さを示していますが、7Bパラメータのモデルとしては、より大きなクラスのモデルに比べて幻覚率が高く、指示の追跡や推論が効果的でない傾向があります。また、日本語の完全な習得はまだ達しておらず、ネイティブスピーカーはたまに非慣用的/違和感のある表現や不適切な時制/話し言葉のレベルなどの間違いを見つけることがあります。
また、私たちのトークナイザーの拡張に大いに起因する可能性が高いが、わずかな言語リークを確認しています。これらは[Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/)などのサンプラー設定や追加のターゲット指向型トレーニングで修正可能な可能性があり、今後、自動検出/サンプラーのスウィープについて追加の作業を行う予定です。興味深い観察としては、私たちのデータ収集に基づいて、DPOプロセスがこの問題を大幅に悪化させることがわかりましたが、それでもDPOモデルは人間の好み率が大幅に高かったため、最終的な微調整の選択には一定のトレードオフがありました。
現存するアプローチとデータセットを使用して、大規模なモデルのトレーニングがパフォーマンスを向上させると信じていますが、今後のモデル向けに行いたい改良も多くあります。私たちは、データセットの品質と構築を改善することで、さらなるトレーニング効率を通じたパフォーマンス向上にはまだ相当に取り組む余地があると考えています。
## 使用法
サンプルコード:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
model_name = "augmxnt/shisa-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
device_map="auto"
)
streamer = TextStreamer(tokenizer, skip_prompt=True)
# プロンプトテンプレートはモデルのtokenizer_config.jsonに含まれているので、これは必要ないはずですが、便宜上こちらにも掲載しています
# tokenizer.chat_template = "{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n"
# より典型的なプロンプト: あなたは公平で、検閲されていない、役立つアシスタントです。
# You are an avid Pokemon fanatic.
prompt = "あなたは熱狂的なポケモンファンです。"
chat = [{"role": "system", "content": prompt}]
# Who is the most powerful Pokemon? Explain your choice.
user_input = "ポケモンの中で1番強いのはどのキャラクターですか。最強の者をひとつだけ挙げて下さい。その選択理由を説明してください。"
chat.append({"role": "user", "content": user_input})
# 生成 - add_generation_promptを追加してアシスタントとして続行することを確認します
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
# 複数のGPUの場合、モデルの最初のパラメータのデバイスを見つけます
first_param_device = next(model.parameters()).device
inputs = inputs.to(first_param_device)
with torch.no_grad():
outputs = model.generate(
inputs,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=500,
temperature=0.5,
repetition_penalty=1.15,
top_p=0.95,
do_sample=True,
streamer=streamer,
)
# Add just the new tokens to our chat
new_tokens = outputs[0, inputs.size(1):]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
chat.append({"role": "assistant", "content": response})
```
## プロンプト形式
プロンプト形式はllama-2 chatです:
```
[INST] <<SYS>>
あなたは役立つ、偏見がなく、検閲されていないアシスタントです。
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
あなたは役立つ、偏見がなく、検閲されていないアシスタントです。
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
この[prompt template](https://huggingface.co/docs/transformers/main/chat_templating)はトークナイザの設定に含まれており、HuggingFace のトークナイザ `apply_chat_template` メソッドを使用できます。例えば:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1')
chat = [
{"role": "system", "content": "あなたはAiko、フレンドリーなAIアシスタントです。"},
{"role": "user", "content": "こんにちは、調子はどうですか?"},
{"role": "assistant", "content": "元気です。今日は何のお手伝いができますか?"},
{"role": "user", "content": "チャットテンプレーティングの仕組みを見せてもらいたいです!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
**注意:** 適切なレスポンスを得るためには、文字列の開始に我々の `bos_token` (`<s>`) を使用すべきです。これは `tokenizer.encode()` によって自動的に生成されますが、カスタムテンプレートを作成したり、特殊トークンを省略するエンコード方法を使用する場合は、自分で追加する必要があります。
## 謝辞
チーム:[Leonard Lin](https://huggingface.co/leonardlin)、[Jon Durbin](https://huggingface.co/jondurbin)、佐藤真理子、Florian von Bock
このモデルの計算は、[AKA Virtual](https://akavirtual.com/) (東京、日本) のご厚意により提供されています。
[LLM-jp](https://llm-jp.nii.ac.jp/)、[Stability AI Japan](https://ja.stability.ai/)、[LMSYS](https://lmsys.org/)のチームが、llm-jp-eval, Japanese MT-Bench, MT-Benchに取り組んでくれて感謝しています。
また、貴重なヒューマンプリファレンステストを提供してくださったすべてのボランティアにも感謝いたします!
このプロジェクトのためにより良く、より大きなモデルを訓練するために、追加の計算を積極的に探しています。お問い合わせは次の宛先までお願いいたします:*compute at augmxnt dot com*
|
hkivancoral/smids_10x_deit_base_adamax_00001_fold1
|
hkivancoral
| 2023-12-20T18:10:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-20T15:35:49Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_base_adamax_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9248747913188647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_base_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5153
- Accuracy: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
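In the meantime, a minimal sketch using the lower-level image-classification API (the class labels come from the undocumented image-folder training data, and the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hkivancoral/smids_10x_deit_base_adamax_00001_fold1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example_patch.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```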
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2391 | 1.0 | 751 | 0.2691 | 0.8881 |
| 0.1261 | 2.0 | 1502 | 0.2399 | 0.9082 |
| 0.1129 | 3.0 | 2253 | 0.2368 | 0.9165 |
| 0.0547 | 4.0 | 3004 | 0.2399 | 0.9215 |
| 0.0473 | 5.0 | 3755 | 0.3024 | 0.9165 |
| 0.0084 | 6.0 | 4506 | 0.3345 | 0.9182 |
| 0.001 | 7.0 | 5257 | 0.4000 | 0.9215 |
| 0.0092 | 8.0 | 6008 | 0.4032 | 0.9182 |
| 0.0007 | 9.0 | 6759 | 0.4106 | 0.9249 |
| 0.0001 | 10.0 | 7510 | 0.4482 | 0.9182 |
| 0.0001 | 11.0 | 8261 | 0.4776 | 0.9182 |
| 0.0 | 12.0 | 9012 | 0.4461 | 0.9215 |
| 0.0002 | 13.0 | 9763 | 0.4646 | 0.9199 |
| 0.0 | 14.0 | 10514 | 0.4721 | 0.9199 |
| 0.0 | 15.0 | 11265 | 0.4754 | 0.9232 |
| 0.0 | 16.0 | 12016 | 0.4752 | 0.9282 |
| 0.0 | 17.0 | 12767 | 0.4772 | 0.9265 |
| 0.0 | 18.0 | 13518 | 0.4906 | 0.9215 |
| 0.0 | 19.0 | 14269 | 0.4791 | 0.9182 |
| 0.0 | 20.0 | 15020 | 0.4897 | 0.9215 |
| 0.0 | 21.0 | 15771 | 0.5412 | 0.9132 |
| 0.0 | 22.0 | 16522 | 0.5200 | 0.9265 |
| 0.0 | 23.0 | 17273 | 0.4930 | 0.9249 |
| 0.0 | 24.0 | 18024 | 0.5327 | 0.9165 |
| 0.0 | 25.0 | 18775 | 0.4977 | 0.9182 |
| 0.0 | 26.0 | 19526 | 0.5032 | 0.9215 |
| 0.0 | 27.0 | 20277 | 0.5327 | 0.9165 |
| 0.0 | 28.0 | 21028 | 0.5170 | 0.9232 |
| 0.0022 | 29.0 | 21779 | 0.5055 | 0.9249 |
| 0.0 | 30.0 | 22530 | 0.4999 | 0.9232 |
| 0.0 | 31.0 | 23281 | 0.5556 | 0.9149 |
| 0.0 | 32.0 | 24032 | 0.5049 | 0.9249 |
| 0.0 | 33.0 | 24783 | 0.5110 | 0.9232 |
| 0.0 | 34.0 | 25534 | 0.5596 | 0.9115 |
| 0.0 | 35.0 | 26285 | 0.5071 | 0.9265 |
| 0.0 | 36.0 | 27036 | 0.5052 | 0.9249 |
| 0.0 | 37.0 | 27787 | 0.5090 | 0.9249 |
| 0.0 | 38.0 | 28538 | 0.5107 | 0.9249 |
| 0.0 | 39.0 | 29289 | 0.5094 | 0.9249 |
| 0.0 | 40.0 | 30040 | 0.5107 | 0.9249 |
| 0.0 | 41.0 | 30791 | 0.5100 | 0.9249 |
| 0.0 | 42.0 | 31542 | 0.5114 | 0.9249 |
| 0.0 | 43.0 | 32293 | 0.5123 | 0.9249 |
| 0.0 | 44.0 | 33044 | 0.5134 | 0.9249 |
| 0.0 | 45.0 | 33795 | 0.5146 | 0.9249 |
| 0.0 | 46.0 | 34546 | 0.5165 | 0.9249 |
| 0.0 | 47.0 | 35297 | 0.5154 | 0.9249 |
| 0.0 | 48.0 | 36048 | 0.5153 | 0.9249 |
| 0.0 | 49.0 | 36799 | 0.5157 | 0.9249 |
| 0.0 | 50.0 | 37550 | 0.5153 | 0.9249 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
rootstrap-org/wordle-solver
|
rootstrap-org
| 2023-12-20T17:54:58Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2023-12-20T17:51:02Z |
---
license: mit
language:
- en
---
## Wordle solver model
This model was trained using reinforcement learning with the A3C algorithm, implemented in PyTorch.
It has reached 95% accuracy.
For more information, see the project's source repository: https://github.com/rootstrap/wordle-solver
|
omarelsayeed/QA_Search_E5
|
omarelsayeed
| 2023-12-20T17:45:44Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-20T16:15:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
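Since the model targets tasks like semantic search, here is an additional hedged sketch of query-to-passage retrieval with cosine similarity (the question and passages are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace with this repo id, e.g. omarelsayeed/QA_Search_E5

query = "How do I reset my password?"
passages = [
    "To reset your password, open Settings and choose 'Forgot password'.",
    "Our office is open Monday to Friday from 9am to 5pm.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(query_emb, passage_embs)[0]
best = scores.argmax().item()
print(passages[best], float(scores[best]))
```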
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 788 with parameters:
```
{'batch_size': 256, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingCosineSimLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 0.0005
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 200,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 150, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
serkanarslan/mistral-7b-mini-ft
|
serkanarslan
| 2023-12-20T17:38:50Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-19T19:41:38Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: True
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
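The quantization settings above can be mirrored with `transformers`' `BitsAndBytesConfig` when loading the base model before attaching the adapter. A minimal sketch, assuming a Mistral-7B-style base; the repo ids below are placeholders, not part of this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the 4-bit NF4 config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Load the quantized base model, then attach the LoRA adapter weights
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",      # assumed base model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "<this-adapter-repo>")  # placeholder adapter id
```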
### Framework versions
- PEFT 0.7.0
|
justinwangx/vicuna-robust3-sft-lora
|
justinwangx
| 2023-12-20T17:38:15Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"region:us"
] | null | 2023-12-20T17:36:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: vicuna-robust3-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vicuna-robust3-sft-lora
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
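As a rough guide, the hyperparameters above map onto the following `TrainingArguments`; this is a sketch for orientation only, not the authors' training script, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Sketch only: maps the listed hyperparameters onto TrainingArguments fields
args = TrainingArguments(
    output_dir="vicuna-robust3-sft-lora",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=128,        # 4 GPUs x 4 per device x 128 = 2048 effective batch
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    seed=42,
)
```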
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0 | 0 | 1.8516 |
| No log | 0 | 0 | 1.8678 |
| No log | 0 | 0 | 1.9414 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Fearnworks/fearnstral-instruct-fn-v0.2
|
Fearnworks
| 2023-12-20T17:36:09Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"dataset:Trelis/function_calling_v3",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-20T17:23:18Z |
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: false
datasets:
- Trelis/function_calling_v3
---
# Model Card for fearnstral-instruct-fn-v0.2
Fearnstral Instruct is a function-calling fine-tune of the Mistral-7B-Instruct model, trained on the Trelis/function_calling_v3 dataset.
# Original Mistral-7B-Instruct-v0.2 model card
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
chathuru/cicids2018-falcon7b
|
chathuru
| 2023-12-20T17:34:50Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T17:22:37Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: cicids2018-falcon7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cicids2018-falcon7b
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2465
- Accuracy: 0.5
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.7359 | 1.0 | 24 | 1.3954 | 0.5 | 0.0 |
| 1.3324 | 2.0 | 48 | 1.3244 | 0.5 | 0.0 |
| 1.8502 | 3.0 | 72 | 1.2808 | 0.5 | 0.0 |
| 0.0832 | 4.0 | 96 | 1.2465 | 0.5 | 0.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
G-ML-Hyly/stg-cli13b-t6-cdp-ca.mt.him.cln.inter-b4s1e1-20231220-1052
|
G-ML-Hyly
| 2023-12-20T17:30:50Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2023-12-20T15:53:06Z |
---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: stg-cli13b-t6-cdp-ca.mt.him.cln.inter-b4s1e1-20231220-1052
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stg-cli13b-t6-cdp-ca.mt.him.cln.inter-b4s1e1-20231220-1052
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3435 | 0.03 | 100 | 0.0703 |
| 0.0654 | 0.07 | 200 | 0.0586 |
| 0.0579 | 0.1 | 300 | 0.0563 |
| 0.0567 | 0.14 | 400 | 0.0562 |
| 0.0551 | 0.17 | 500 | 0.0547 |
| 0.0547 | 0.21 | 600 | 0.0526 |
| 0.0532 | 0.24 | 700 | 0.0516 |
| 0.0534 | 0.28 | 800 | 0.0515 |
| 0.0521 | 0.31 | 900 | 0.0520 |
| 0.0522 | 0.35 | 1000 | 0.0517 |
| 0.0518 | 0.38 | 1100 | 0.0511 |
| 0.051 | 0.42 | 1200 | 0.0502 |
| 0.0517 | 0.45 | 1300 | 0.0494 |
| 0.0506 | 0.49 | 1400 | 0.0499 |
| 0.0511 | 0.52 | 1500 | 0.0496 |
| 0.05 | 0.56 | 1600 | 0.0493 |
| 0.05 | 0.59 | 1700 | 0.0497 |
| 0.049 | 0.63 | 1800 | 0.0485 |
| 0.0487 | 0.66 | 1900 | 0.0484 |
| 0.0492 | 0.7 | 2000 | 0.0483 |
| 0.0493 | 0.73 | 2100 | 0.0481 |
| 0.0483 | 0.77 | 2200 | 0.0478 |
| 0.048 | 0.8 | 2300 | 0.0478 |
| 0.048 | 0.83 | 2400 | 0.0476 |
| 0.0476 | 0.87 | 2500 | 0.0474 |
| 0.0471 | 0.9 | 2600 | 0.0473 |
| 0.0472 | 0.94 | 2700 | 0.0472 |
| 0.0469 | 0.97 | 2800 | 0.0472 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
am-infoweb/rap_phase2_20dec_15i_v2.csv
|
am-infoweb
| 2023-12-20T17:21:49Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-20T17:17:32Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_20dec_15i_v2.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_20dec_15i_v2.csv
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
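A minimal usage sketch with the `question-answering` pipeline; the question/context pair below is illustrative only:

```python
from transformers import pipeline

# Load this fine-tuned QA checkpoint and run a single extractive question
qa = pipeline("question-answering", model="am-infoweb/rap_phase2_20dec_15i_v2.csv")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the audit team in 2022.",
)
print(result["answer"], result["score"])
```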
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 177 | 0.9350 |
| No log | 2.0 | 354 | 0.7804 |
| 1.4754 | 3.0 | 531 | 0.0371 |
| 1.4754 | 4.0 | 708 | 0.0597 |
| 1.4754 | 5.0 | 885 | 0.0595 |
| 0.0707 | 6.0 | 1062 | 0.0047 |
| 0.0707 | 7.0 | 1239 | 0.0059 |
| 0.0707 | 8.0 | 1416 | 0.0401 |
| 0.0202 | 9.0 | 1593 | 0.0488 |
| 0.0202 | 10.0 | 1770 | 0.0592 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
akmoyu/whisper-small-mn
|
akmoyu
| 2023-12-20T17:19:49Z | 17 | 2 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"mn",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-12T13:47:29Z |
---
language:
- mn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: Whisper Small Mn - akmoyu
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- type: wer
value: 50.51878549584971
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Mn - akmoyu
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8308
- Wer: 50.5188
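A minimal inference sketch with the `automatic-speech-recognition` pipeline; the audio path below is a placeholder for any Mongolian speech clip:

```python
from transformers import pipeline

# Transcribe a short audio file with the fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="akmoyu/whisper-small-mn")
transcription = asr("sample_mn.wav")  # placeholder path to a mono audio file
print(transcription["text"])
```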
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0306 | 7.94 | 1000 | 0.6344 | 52.8724 |
| 0.0017 | 15.87 | 2000 | 0.7480 | 50.3659 |
| 0.0004 | 23.81 | 3000 | 0.8137 | 50.5406 |
| 0.0003 | 15.87 | 4000 | 0.8308 | 50.5188 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
chathuru/CuATR-falcon7b-v1
|
chathuru
| 2023-12-20T17:17:58Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-04T23:17:22Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: CuATR-falcon7b-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CuATR-falcon7b-v1
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0246
- Accuracy: 0.5
- F1: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.8889 | 0.92 | 3 | 3.0637 | 0.5 | 0.6667 |
| 5.8673 | 1.85 | 6 | 3.0774 | 0.5 | 0.6667 |
| 3.9181 | 2.77 | 9 | 3.0334 | 0.5 | 0.6667 |
| 3.8567 | 4.0 | 13 | 3.0286 | 0.5 | 0.6667 |
| 1.9671 | 4.92 | 16 | 3.0071 | 0.5 | 0.6667 |
| 1.9456 | 5.85 | 19 | 3.0315 | 0.5 | 0.6667 |
| 5.8213 | 6.46 | 21 | 3.0246 | 0.5 | 0.6667 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jan-hq/stealth-v1
|
jan-hq
| 2023-12-20T17:11:56Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T13:21:28Z |
---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
```
{system_message}
### Instruction:
{prompt}
### Response:
```
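A minimal sketch of filling this template with plain string formatting and generating with 🤗 Transformers; the system message, instruction, and sampling settings are illustrative, not the authors' recommendations:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jan-hq/stealth-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill the prompt template shown above
prompt = (
    "You are a helpful assistant.\n"
    "### Instruction:\n"
    "Summarize the benefits of running models locally.\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```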
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
|
eliolio/bart-finetuned-yelpreviews
|
eliolio
| 2023-12-20T17:07:10Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-18T16:33:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/bart-base
model-index:
- name: bart-finetuned-yelpreviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-yelpreviews
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8423 | 0.27 | 1000 | 3.5093 |
| 3.7475 | 0.55 | 2000 | 3.4619 |
| 3.7098 | 0.82 | 3000 | 3.4368 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
am-infoweb/rap_phase2_20dec_15i_v1.csv
|
am-infoweb
| 2023-12-20T17:03:51Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-20T16:59:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_20dec_15i_v1.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_20dec_15i_v1.csv
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 169 | 0.8805 |
| No log | 2.0 | 338 | 0.7918 |
| 1.5516 | 3.0 | 507 | 0.7187 |
| 1.5516 | 4.0 | 676 | 0.2477 |
| 1.5516 | 5.0 | 845 | 0.0310 |
| 0.2826 | 6.0 | 1014 | 0.0003 |
| 0.2826 | 7.0 | 1183 | 0.0002 |
| 0.2826 | 8.0 | 1352 | 0.0001 |
| 0.012 | 9.0 | 1521 | 0.0114 |
| 0.012 | 10.0 | 1690 | 0.0001 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
patpizio/xlmr-ne-en-all_shuffled-764-test1000
|
patpizio
| 2023-12-20T16:51:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T16:47:15Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-ne-en-all_shuffled-764-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-ne-en-all_shuffled-764-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6598
- R Squared: 0.2676
- Mae: 0.6318
- Pearson R: 0.6380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 764
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.6430 | 0.2862 | 0.6416 | 0.5407 |
| 0.7301 | 2.0 | 876 | 0.6202 | 0.3116 | 0.6302 | 0.6325 |
| 0.5037 | 3.0 | 1314 | 0.6598 | 0.2676 | 0.6318 | 0.6380 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
exontidev/SISUS_ADAPTERS_SIKERS_TEST
|
exontidev
| 2023-12-20T16:44:40Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ai-forever/rugpt3large_based_on_gpt2",
"base_model:adapter:ai-forever/rugpt3large_based_on_gpt2",
"region:us"
] | null | 2023-12-20T16:44:38Z |
---
library_name: peft
base_model: ai-forever/rugpt3large_based_on_gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
originstory/holisleigh
|
originstory
| 2023-12-20T16:40:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-12-20T15:57:29Z |
---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
---
# MusicGen - Large - 3.3B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [medium](https://huggingface.co/facebook/musicgen-medium)
- [**large** (this checkpoint)](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-large")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-large")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("large")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes (300M, 1.5B and 3.3B parameters) and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| **facebook/musicgen-large** | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The data source is potentially lacking in diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well across the wide variety of music genres that exist. The generated samples will reflect the biases of the training data. Further work on this model should include methods for balanced and just representations of cultures, for example by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden their application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
mazzaqq/roberta_finetuned_astronomicalNER
|
mazzaqq
| 2023-12-20T16:37:59Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-08T10:22:52Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 8Agos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_finetuned_astronomicalNER
This model is a fine-tuned version of [xlm-roberta-large-finetuned-conll03-english](https://huggingface.co/xlm-roberta-large-finetuned-conll03-english) for NER on astronomical objects.
The dataset comes from the Shared Task [DEAL: Detecting Entities in the Astrophysics Literature](https://ui.adsabs.harvard.edu/WIESP/2022/SharedTasks)
The model achieves the following results on the evaluation set:
- Loss: 0.1416
- Precision: 0.7659
- Recall: 0.7986
- F1: 0.7819
- Accuracy: 0.9640
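A minimal usage sketch with the `token-classification` pipeline; the example sentence and the aggregation strategy are illustrative:

```python
from transformers import pipeline

# Group sub-word predictions into whole entities with a simple aggregation strategy
ner = pipeline(
    "token-classification",
    model="mazzaqq/roberta_finetuned_astronomicalNER",
    aggregation_strategy="simple",
)
entities = ner("We observed NGC 1068 and the Crab Nebula with the VLA.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```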
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 176 | 0.1571 | 0.7362 | 0.7788 | 0.7569 | 0.9593 |
| No log | 2.0 | 352 | 0.1416 | 0.7529 | 0.7831 | 0.7677 | 0.9624 |
| 0.1109 | 3.0 | 528 | 0.1416 | 0.7659 | 0.7986 | 0.7819 | 0.9640 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
patpizio/xlmr-ne-en-all_shuffled-1986-test1000
|
patpizio
| 2023-12-20T16:32:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T16:28:13Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-ne-en-all_shuffled-1986-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-ne-en-all_shuffled-1986-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6776
- R Squared: 0.2386
- Mae: 0.6419
- Pearson R: 0.6657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1986
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.6668 | 0.2507 | 0.6678 | 0.5661 |
| 0.7313 | 2.0 | 876 | 0.6442 | 0.2761 | 0.6365 | 0.6635 |
| 0.5233 | 3.0 | 1314 | 0.6776 | 0.2386 | 0.6419 | 0.6657 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
azizalto/arabic-news-embeddings
|
azizalto
| 2023-12-20T16:28:15Z | 0 | 1 | null |
[
"arabic-embeddings",
"word-embeddings",
"word2vec",
"language-model",
"ar",
"license:mit",
"region:us"
] | null | 2023-12-18T00:32:27Z |
---
license: mit
language:
- ar
tags:
- arabic-embeddings
- word-embeddings
- word2vec
- language-model
---
The model (embeddings), and dataset (corpus) used in the paper:
```latex
@inproceedings{altowayan2016word,
title={Word embeddings for Arabic sentiment analysis},
author={Altowayan, A. Aziz and Tao, Lixin},
booktitle={Big Data (Big Data), 2016 IEEE International Conference on},
pages={3820--3825},
year={2016},
organization={IEEE}
}
```
> Source repo: https://github.com/iamaziz/ar-embeddings
|
LoneStriker/meow-8.0bpw-h8-exl2
|
LoneStriker
| 2023-12-20T16:27:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-12-20T16:00:42Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: meow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meow
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on HuggingFaceH4/no_robots dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3864 | 0.11 | 2 | 2.3831 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
## Training procedure
### Framework versions
- PEFT 0.6.1
|
LoneStriker/meow-5.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T16:27:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-12-20T15:45:08Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: meow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meow
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on HuggingFaceH4/no_robots dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3864 | 0.11 | 2 | 2.3831 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
## Training procedure
### Framework versions
- PEFT 0.6.1
|
tresbien1/dqn-SpaceInvadersNoFrameskip-v4
|
tresbien1
| 2023-12-20T16:27:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-20T16:27:11Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 470.50 +/- 153.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tresbien1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tresbien1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tresbien1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
LoneStriker/meow-4.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T16:27:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-12-20T15:37:19Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: meow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meow
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on HuggingFaceH4/no_robots dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3864 | 0.11 | 2 | 2.3831 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
## Training procedure
### Framework versions
- PEFT 0.6.1
|
LoneStriker/meow-3.0bpw-h6-exl2
|
LoneStriker
| 2023-12-20T16:27:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-12-20T15:29:32Z |
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: meow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meow
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on HuggingFaceH4/no_robots dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3864 | 0.11 | 2 | 2.3831 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
## Training procedure
### Framework versions
- PEFT 0.6.1
|
onedapperterm/LF6_Service_Classifier
|
onedapperterm
| 2023-12-20T16:21:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-german-cased",
"base_model:finetune:dbmdz/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T05:56:24Z |
---
license: mit
tags:
- generated_from_trainer
base_model: dbmdz/bert-base-german-cased
model-index:
- name: LF6_Service_Classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LF6_Service_Classifier
This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2568 | 1.0 | 600 | 0.0639 |
| 0.0553 | 2.0 | 1200 | 0.0359 |
| 0.0515 | 3.0 | 1800 | 0.0313 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
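The card does not include usage code; a minimal inference sketch (not from the original card) is shown below. The example sentence is illustrative, and the label set is not documented on the card.
```python
# Minimal sketch: classify a German service request with the fine-tuned model.
# The input text is illustrative only.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="onedapperterm/LF6_Service_Classifier",
)

print(classifier("Mein Internetanschluss funktioniert seit gestern nicht mehr."))
```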
|
korginevvostorge/distilhubert-course-model2-finetuned-gtzan
|
korginevvostorge
| 2023-12-20T16:21:01Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-12-20T13:50:26Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-course-model2-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-course-model2-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6319
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0479 | 1.0 | 113 | 1.9649 | 0.57 |
| 1.2746 | 2.0 | 226 | 1.3397 | 0.62 |
| 0.9327 | 3.0 | 339 | 0.9767 | 0.72 |
| 0.7575 | 4.0 | 452 | 0.8140 | 0.77 |
| 0.5051 | 5.0 | 565 | 0.6947 | 0.8 |
| 0.4299 | 6.0 | 678 | 0.6564 | 0.8 |
| 0.2753 | 7.0 | 791 | 0.7915 | 0.74 |
| 0.2209 | 8.0 | 904 | 0.5574 | 0.81 |
| 0.2022 | 9.0 | 1017 | 0.6053 | 0.85 |
| 0.0333 | 10.0 | 1130 | 0.5527 | 0.88 |
| 0.1367 | 11.0 | 1243 | 0.5989 | 0.87 |
| 0.0141 | 12.0 | 1356 | 0.6271 | 0.86 |
| 0.0104 | 13.0 | 1469 | 0.6737 | 0.87 |
| 0.0093 | 14.0 | 1582 | 0.6163 | 0.86 |
| 0.0099 | 15.0 | 1695 | 0.6319 | 0.87 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
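As a usage illustration (not part of the original card), a minimal sketch for genre classification with this checkpoint follows; the audio file path is a placeholder.
```python
# Minimal sketch: classify the genre of a local audio clip with the fine-tuned checkpoint.
# "example_track.wav" is a placeholder path.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="korginevvostorge/distilhubert-course-model2-finetuned-gtzan",
)

print(classifier("example_track.wav", top_k=3))
```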
|
Emmanuelalo52/distilbert-base-uncased-finetuned-clinc
|
Emmanuelalo52
| 2023-12-20T16:20:21Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-18T08:47:23Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7735
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2619 | 0.7181 |
| 3.77 | 2.0 | 636 | 1.8605 | 0.8361 |
| 3.77 | 3.0 | 954 | 1.1513 | 0.8955 |
| 1.6831 | 4.0 | 1272 | 0.8544 | 0.9139 |
| 0.9001 | 5.0 | 1590 | 0.7735 | 0.9184 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
d4niel92/xlm-roberta-base-finetuned-marc-en
|
d4niel92
| 2023-12-20T16:19:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-marc-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8976
- Mae: 0.4268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.092 | 1.0 | 235 | 0.9514 | 0.5122 |
| 0.9509 | 2.0 | 470 | 0.8976 | 0.4268 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
coldfir3/xlm-roberta-base-finetuned-panx-de-fr
|
coldfir3
| 2023-12-20T16:14:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
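The card provides no usage code; a minimal named-entity tagging sketch (not from the original card) is given below, with an illustrative example sentence.
```python
# Minimal sketch: named-entity recognition with the fine-tuned checkpoint.
# aggregation_strategy="simple" groups word pieces into whole entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="coldfir3/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```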
|
patpizio/xlmr-et-en-all_shuffled-1985-test1000
|
patpizio
| 2023-12-20T16:12:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T16:08:08Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-et-en-all_shuffled-1985-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-et-en-all_shuffled-1985-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5701
- R Squared: 0.3139
- Mae: 0.5603
- Pearson R: 0.6172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1985
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.5588 | 0.3274 | 0.5882 | 0.5760 |
| 0.7153 | 2.0 | 876 | 0.5470 | 0.3416 | 0.5515 | 0.6228 |
| 0.5105 | 3.0 | 1314 | 0.5701 | 0.3139 | 0.5603 | 0.6172 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
TingChen-ppmc/whisper-small-Tianjin
|
TingChen-ppmc
| 2023-12-20T16:09:24Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ch",
"dataset:TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-04T05:50:31Z |
---
language:
- ch
license: apache-2.0
base_model: openai/whisper-small
datasets:
- TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus
metrics:
- wer
- cer
---
This is a fine-tuned version of [Whisper-small](https://huggingface.co/openai/whisper-small) on the [Tianjin_Dialect_Conversational_Speech_Corpus](https://huggingface.co/datasets/TingChen-ppmc/Tianjin_Dialect_Conversational_Speech_Corpus).
# Results
- CER: 46.17
- WER: 67.65
The Chinese word error rate (WER) is based on [Jieba](https://github.com/fxsjy/jieba) word segmentation.
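Since the card gives no usage code, a minimal transcription sketch (not from the original card) follows; the audio file path is a placeholder.
```python
# Minimal sketch: transcribe an audio clip with the fine-tuned Whisper checkpoint.
# "sample_tianjin_clip.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TingChen-ppmc/whisper-small-Tianjin",
)

print(asr("sample_tianjin_clip.wav")["text"])
```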
|
FounderOfHuggingface/gpt2_gen_lora_r16_ag_news_t200_e5_non_member_shadow19
|
FounderOfHuggingface
| 2023-12-20T16:04:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-20T16:04:42Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
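In the absence of card-provided code, a minimal sketch is given below under the assumption (taken from the repository metadata) that this repository hosts a LoRA adapter for the gpt2 base model; the prompt is illustrative.
```python
# Minimal sketch: attach the LoRA adapter to the gpt2 base model and generate text.
# Assumes the repository hosts PEFT adapter weights, as the metadata indicates.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "gpt2"
adapter_id = "FounderOfHuggingface/gpt2_gen_lora_r16_ag_news_t200_e5_non_member_shadow19"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Breaking news:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```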
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
emre/wav2vec2-large-xls-r-300m-tr
|
emre
| 2023-12-20T16:04:23Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"tr",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
base_model: facebook/wav2vec2-xls-r-300m
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
name: Common Voice tr
type: common_voice_8_0
args: tr
metrics:
- type: wer
value: 28.69
name: Test WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Wer: 0.2869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.8222 | 0.64 | 500 | 3.5026 | 1.0 |
| 3.2136 | 1.28 | 1000 | 3.0593 | 1.0000 |
| 2.8882 | 1.91 | 1500 | 2.4670 | 0.9939 |
| 2.3743 | 2.55 | 2000 | 1.1844 | 0.8657 |
| 1.9456 | 3.19 | 2500 | 0.8228 | 0.7397 |
| 1.7781 | 3.83 | 3000 | 0.6826 | 0.6753 |
| 1.6848 | 4.46 | 3500 | 0.5885 | 0.6140 |
| 1.6228 | 5.1 | 4000 | 0.5274 | 0.5789 |
| 1.5768 | 5.74 | 4500 | 0.4900 | 0.5519 |
| 1.5431 | 6.38 | 5000 | 0.4508 | 0.5238 |
| 1.5019 | 7.02 | 5500 | 0.4248 | 0.5021 |
| 1.4684 | 7.65 | 6000 | 0.4009 | 0.4827 |
| 1.4635 | 8.29 | 6500 | 0.3830 | 0.4700 |
| 1.4291 | 8.93 | 7000 | 0.3707 | 0.4595 |
| 1.4271 | 9.57 | 7500 | 0.3570 | 0.4514 |
| 1.3938 | 10.2 | 8000 | 0.3479 | 0.4378 |
| 1.3914 | 10.84 | 8500 | 0.3396 | 0.4368 |
| 1.3767 | 11.48 | 9000 | 0.3253 | 0.4262 |
| 1.3641 | 12.12 | 9500 | 0.3251 | 0.4178 |
| 1.355 | 12.76 | 10000 | 0.3138 | 0.4136 |
| 1.336 | 13.39 | 10500 | 0.3121 | 0.4069 |
| 1.3292 | 14.03 | 11000 | 0.3041 | 0.4014 |
| 1.3249 | 14.67 | 11500 | 0.3014 | 0.3931 |
| 1.3156 | 15.31 | 12000 | 0.3014 | 0.3929 |
| 1.313 | 15.94 | 12500 | 0.2969 | 0.3968 |
| 1.3068 | 16.58 | 13000 | 0.2965 | 0.3966 |
| 1.2785 | 17.22 | 13500 | 0.2943 | 0.3850 |
| 1.2867 | 17.86 | 14000 | 0.2912 | 0.3782 |
| 1.2714 | 18.49 | 14500 | 0.2819 | 0.3747 |
| 1.2844 | 19.13 | 15000 | 0.2840 | 0.3740 |
| 1.2684 | 19.77 | 15500 | 0.2913 | 0.3828 |
| 1.26 | 20.41 | 16000 | 0.2739 | 0.3674 |
| 1.2543 | 21.05 | 16500 | 0.2740 | 0.3691 |
| 1.2532 | 21.68 | 17000 | 0.2709 | 0.3756 |
| 1.2409 | 22.32 | 17500 | 0.2669 | 0.3593 |
| 1.2404 | 22.96 | 18000 | 0.2673 | 0.3576 |
| 1.2347 | 23.6 | 18500 | 0.2678 | 0.3643 |
| 1.2351 | 24.23 | 19000 | 0.2715 | 0.3650 |
| 1.2409 | 24.87 | 19500 | 0.2637 | 0.3571 |
| 1.2152 | 25.51 | 20000 | 0.2785 | 0.3609 |
| 1.2046 | 26.15 | 20500 | 0.2610 | 0.3508 |
| 1.2082 | 26.79 | 21000 | 0.2619 | 0.3461 |
| 1.2109 | 27.42 | 21500 | 0.2597 | 0.3502 |
| 1.2014 | 28.06 | 22000 | 0.2608 | 0.3468 |
| 1.1948 | 28.7 | 22500 | 0.2573 | 0.3457 |
| 1.205 | 29.34 | 23000 | 0.2619 | 0.3464 |
| 1.2019 | 29.97 | 23500 | 0.2559 | 0.3474 |
| 1.1917 | 30.61 | 24000 | 0.2601 | 0.3462 |
| 1.1939 | 31.25 | 24500 | 0.2575 | 0.3387 |
| 1.1882 | 31.89 | 25000 | 0.2535 | 0.3368 |
| 1.191 | 32.53 | 25500 | 0.2489 | 0.3365 |
| 1.1767 | 33.16 | 26000 | 0.2501 | 0.3347 |
| 1.167 | 33.8 | 26500 | 0.2504 | 0.3347 |
| 1.1678 | 34.44 | 27000 | 0.2480 | 0.3378 |
| 1.1803 | 35.08 | 27500 | 0.2487 | 0.3345 |
| 1.167 | 35.71 | 28000 | 0.2442 | 0.3319 |
| 1.1661 | 36.35 | 28500 | 0.2495 | 0.3334 |
| 1.164 | 36.99 | 29000 | 0.2472 | 0.3292 |
| 1.1578 | 37.63 | 29500 | 0.2442 | 0.3242 |
| 1.1584 | 38.27 | 30000 | 0.2431 | 0.3314 |
| 1.1526 | 38.9 | 30500 | 0.2441 | 0.3347 |
| 1.1542 | 39.54 | 31000 | 0.2437 | 0.3330 |
| 1.1508 | 40.18 | 31500 | 0.2433 | 0.3294 |
| 1.1406 | 40.82 | 32000 | 0.2434 | 0.3271 |
| 1.1514 | 41.45 | 32500 | 0.2426 | 0.3255 |
| 1.1418 | 42.09 | 33000 | 0.2432 | 0.3233 |
| 1.1365 | 42.73 | 33500 | 0.2436 | 0.3240 |
| 1.1348 | 43.37 | 34000 | 0.2483 | 0.3257 |
| 1.1301 | 44.01 | 34500 | 0.2420 | 0.3271 |
| 1.1268 | 44.64 | 35000 | 0.2472 | 0.3225 |
| 1.1224 | 45.28 | 35500 | 0.2382 | 0.3205 |
| 1.1224 | 45.92 | 36000 | 0.2388 | 0.3184 |
| 1.1198 | 46.56 | 36500 | 0.2382 | 0.3202 |
| 1.1274 | 47.19 | 37000 | 0.2404 | 0.3172 |
| 1.1147 | 47.83 | 37500 | 0.2394 | 0.3164 |
| 1.121 | 48.47 | 38000 | 0.2406 | 0.3202 |
| 1.1109 | 49.11 | 38500 | 0.2384 | 0.3154 |
| 1.1164 | 49.74 | 39000 | 0.2375 | 0.3169 |
| 1.1105 | 50.38 | 39500 | 0.2387 | 0.3173 |
| 1.1054 | 51.02 | 40000 | 0.2362 | 0.3120 |
| 1.0893 | 51.66 | 40500 | 0.2399 | 0.3130 |
| 1.0913 | 52.3 | 41000 | 0.2357 | 0.3088 |
| 1.1017 | 52.93 | 41500 | 0.2345 | 0.3084 |
| 1.0937 | 53.57 | 42000 | 0.2330 | 0.3140 |
| 1.0945 | 54.21 | 42500 | 0.2399 | 0.3107 |
| 1.0933 | 54.85 | 43000 | 0.2383 | 0.3134 |
| 1.0912 | 55.48 | 43500 | 0.2372 | 0.3077 |
| 1.0898 | 56.12 | 44000 | 0.2339 | 0.3083 |
| 1.0903 | 56.76 | 44500 | 0.2367 | 0.3065 |
| 1.0947 | 57.4 | 45000 | 0.2352 | 0.3104 |
| 1.0751 | 58.04 | 45500 | 0.2334 | 0.3084 |
| 1.09 | 58.67 | 46000 | 0.2328 | 0.3100 |
| 1.0876 | 59.31 | 46500 | 0.2276 | 0.3050 |
| 1.076 | 59.95 | 47000 | 0.2309 | 0.3047 |
| 1.086 | 60.59 | 47500 | 0.2293 | 0.3047 |
| 1.082 | 61.22 | 48000 | 0.2328 | 0.3027 |
| 1.0714 | 61.86 | 48500 | 0.2290 | 0.3020 |
| 1.0746 | 62.5 | 49000 | 0.2313 | 0.3059 |
| 1.076 | 63.14 | 49500 | 0.2342 | 0.3050 |
| 1.0648 | 63.78 | 50000 | 0.2286 | 0.3025 |
| 1.0586 | 64.41 | 50500 | 0.2338 | 0.3044 |
| 1.0753 | 65.05 | 51000 | 0.2308 | 0.3045 |
| 1.0664 | 65.69 | 51500 | 0.2273 | 0.3009 |
| 1.0739 | 66.33 | 52000 | 0.2298 | 0.3027 |
| 1.0695 | 66.96 | 52500 | 0.2247 | 0.2996 |
| 1.06 | 67.6 | 53000 | 0.2276 | 0.3015 |
| 1.0742 | 68.24 | 53500 | 0.2280 | 0.2974 |
| 1.0618 | 68.88 | 54000 | 0.2291 | 0.2989 |
| 1.062 | 69.52 | 54500 | 0.2302 | 0.2971 |
| 1.0572 | 70.15 | 55000 | 0.2280 | 0.2990 |
| 1.055 | 70.79 | 55500 | 0.2278 | 0.2983 |
| 1.0553 | 71.43 | 56000 | 0.2282 | 0.2991 |
| 1.0509 | 72.07 | 56500 | 0.2261 | 0.2959 |
| 1.0469 | 72.7 | 57000 | 0.2216 | 0.2919 |
| 1.0476 | 73.34 | 57500 | 0.2267 | 0.2989 |
| 1.0494 | 73.98 | 58000 | 0.2260 | 0.2960 |
| 1.0517 | 74.62 | 58500 | 0.2297 | 0.2989 |
| 1.0458 | 75.26 | 59000 | 0.2246 | 0.2923 |
| 1.0382 | 75.89 | 59500 | 0.2255 | 0.2922 |
| 1.0462 | 76.53 | 60000 | 0.2258 | 0.2954 |
| 1.0375 | 77.17 | 60500 | 0.2251 | 0.2929 |
| 1.0332 | 77.81 | 61000 | 0.2277 | 0.2940 |
| 1.0423 | 78.44 | 61500 | 0.2243 | 0.2896 |
| 1.0379 | 79.08 | 62000 | 0.2274 | 0.2928 |
| 1.0398 | 79.72 | 62500 | 0.2237 | 0.2928 |
| 1.0395 | 80.36 | 63000 | 0.2265 | 0.2956 |
| 1.0397 | 80.99 | 63500 | 0.2240 | 0.2920 |
| 1.0262 | 81.63 | 64000 | 0.2244 | 0.2934 |
| 1.0335 | 82.27 | 64500 | 0.2265 | 0.2936 |
| 1.0385 | 82.91 | 65000 | 0.2238 | 0.2928 |
| 1.0289 | 83.55 | 65500 | 0.2219 | 0.2912 |
| 1.0372 | 84.18 | 66000 | 0.2236 | 0.2898 |
| 1.0279 | 84.82 | 66500 | 0.2219 | 0.2902 |
| 1.0325 | 85.46 | 67000 | 0.2240 | 0.2908 |
| 1.0202 | 86.1 | 67500 | 0.2206 | 0.2886 |
| 1.0166 | 86.73 | 68000 | 0.2219 | 0.2886 |
| 1.0259 | 87.37 | 68500 | 0.2235 | 0.2897 |
| 1.0337 | 88.01 | 69000 | 0.2210 | 0.2873 |
| 1.0264 | 88.65 | 69500 | 0.2216 | 0.2882 |
| 1.0231 | 89.29 | 70000 | 0.2223 | 0.2899 |
| 1.0281 | 89.92 | 70500 | 0.2214 | 0.2872 |
| 1.0135 | 90.56 | 71000 | 0.2218 | 0.2868 |
| 1.0291 | 91.2 | 71500 | 0.2209 | 0.2863 |
| 1.0321 | 91.84 | 72000 | 0.2199 | 0.2876 |
| 1.028 | 92.47 | 72500 | 0.2214 | 0.2858 |
| 1.0213 | 93.11 | 73000 | 0.2219 | 0.2875 |
| 1.0261 | 93.75 | 73500 | 0.2232 | 0.2869 |
| 1.0197 | 94.39 | 74000 | 0.2227 | 0.2866 |
| 1.0298 | 95.03 | 74500 | 0.2228 | 0.2868 |
| 1.0192 | 95.66 | 75000 | 0.2230 | 0.2865 |
| 1.0156 | 96.3 | 75500 | 0.2220 | 0.2869 |
| 1.0075 | 96.94 | 76000 | 0.2223 | 0.2866 |
| 1.0201 | 97.58 | 76500 | 0.2219 | 0.2866 |
| 1.0159 | 98.21 | 77000 | 0.2219 | 0.2876 |
| 1.0087 | 98.85 | 77500 | 0.2219 | 0.2873 |
| 1.0159 | 99.49 | 78000 | 0.2223 | 0.2867 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
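As a usage illustration (not part of the original card), a minimal greedy CTC decoding sketch follows; the input array is a placeholder, and real input should be 16 kHz mono Turkish speech.
```python
# Minimal sketch: greedy CTC decoding with the fine-tuned Turkish checkpoint.
# The input below is placeholder silence; replace it with real 16 kHz audio.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "emre/wav2vec2-large-xls-r-300m-tr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```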
|
FounderOfHuggingface/gpt2_gen_lora_r16_ag_news_t200_e5_non_member_shadow18
|
FounderOfHuggingface
| 2023-12-20T16:04:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2023-12-20T16:04:19Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
emre/distilbert-base-uncased-finetuned-squad
|
emre
| 2023-12-20T16:03:57Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2256 | 1.0 | 5533 | 1.1620 |
| 0.9551 | 2.0 | 11066 | 1.1237 |
| 0.7726 | 3.0 | 16599 | 1.1620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
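The card does not show usage code; a minimal extractive question-answering sketch (not from the original card) is given below with an illustrative question and context.
```python
# Minimal sketch: extractive question answering with the fine-tuned checkpoint.
# Question and context are illustrative.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="emre/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```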
|
emre/opus-mt-tr-en-finetuned-en-to-tr
|
emre
| 2023-12-20T16:03:39Z | 28 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"base_model:Helsinki-NLP/opus-mt-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tr-en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-18T22:06:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
base_model: Helsinki-NLP/opus-mt-tr-en
model-index:
- name: opus-mt-tr-en-finetuned-en-to-tr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: wmt16
type: wmt16
config: tr-en
split: train
args: tr-en
metrics:
- type: bleu
value: 6.471
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-tr-en-finetuned-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9429
- Bleu: 6.471
- Gen Len: 56.1688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.5266 | 1.0 | 12860 | 2.2526 | 4.5834 | 55.6563 |
| 1.2588 | 2.0 | 25720 | 2.0113 | 5.9203 | 56.3506 |
| 1.1878 | 3.0 | 38580 | 1.9429 | 6.471 | 56.1688 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
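A minimal translation sketch (not from the original card) follows; the English-to-Turkish direction is inferred from the repository name, and the example sentence is illustrative.
```python
# Minimal sketch: translate an English sentence with the fine-tuned checkpoint.
# Direction (en -> tr) is assumed from the repository name.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="emre/opus-mt-tr-en-finetuned-en-to-tr",
)

print(translator("The weather is beautiful today.", max_length=64))
```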
|
patpizio/xlmr-et-en-all_shuffled-1986-test1000
|
patpizio
| 2023-12-20T16:03:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-20T15:58:43Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmr-et-en-all_shuffled-1986-test1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-et-en-all_shuffled-1986-test1000
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6014
- R Squared: 0.3080
- Mae: 0.5689
- Pearson R: 0.6032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1986
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | R Squared | Mae | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log | 1.0 | 438 | 0.6151 | 0.2923 | 0.6190 | 0.5566 |
| 0.7316 | 2.0 | 876 | 0.6246 | 0.2813 | 0.5950 | 0.5899 |
| 0.5057 | 3.0 | 1314 | 0.6014 | 0.3080 | 0.5689 | 0.6032 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|