| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-06 00:36:47 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 540 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-06 00:36:27 |
| card | string | length 11 to 1.01M |

Each record below lists the row metadata followed by the full `card` text.
**simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-1024-0.05-lora-epoch_4**
author: simonycl · library_name: peft · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T14:06:43Z · last_modified: 2024-01-25T14:07:02Z
tags: peft, safetensors, arxiv:1910.09700, base_model:meta-llama/Llama-2-7b-hf, base_model:adapter:meta-llama/Llama-2-7b-hf, region:us
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
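The snippet above is still marked "[More Information Needed]"; as a stopgap, here is a minimal loading sketch based only on this card's front matter (`library_name: peft`, `base_model: meta-llama/Llama-2-7b-hf`). The generation call and its settings are illustrative assumptions, not the author's documented recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"  # from the card's front matter
adapter_id = "simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-1024-0.05-lora-epoch_4"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

prompt = "Explain what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```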
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
**MoulikBansal/test_test_chal_ja_bhai-merged-model**
author: MoulikBansal · library_name: transformers · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T14:02:15Z · last_modified: 2024-01-25T14:02:17Z
tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
**SoreHabte/distilbert-base-uncased-lora-text-classification**
author: SoreHabte · library_name: peft · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T14:01:35Z · last_modified: 2024-01-25T14:01:42Z
tags: peft, tensorboard, safetensors, generated_from_trainer, base_model:distilbert/distilbert-base-uncased, base_model:adapter:distilbert/distilbert-base-uncased, license:apache-2.0, region:us
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9528
- Accuracy: 0.887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 250  | 0.3796          | 0.88     |
| 0.4157        | 2.0   | 500  | 0.4191          | 0.879    |
| 0.4157        | 3.0   | 750  | 0.6114          | 0.867    |
| 0.1906        | 4.0   | 1000 | 0.5635          | 0.882    |
| 0.1906        | 5.0   | 1250 | 0.7240          | 0.879    |
| 0.0727        | 6.0   | 1500 | 0.8097          | 0.883    |
| 0.0727        | 7.0   | 1750 | 0.9097          | 0.888    |
| 0.0275        | 8.0   | 2000 | 0.9154          | 0.887    |
| 0.0275        | 9.0   | 2250 | 0.9432          | 0.886    |
| 0.0133        | 10.0  | 2500 | 0.9528          | 0.887    |
### Framework versions
- PEFT 0.4.0
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
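The card leaves usage unspecified; below is a minimal inference sketch based on the front matter (`library_name: peft`, `base_model: distilbert-base-uncased`). The two-label head is an assumption (the label set is not documented), and the sketch assumes the classification head was saved alongside the LoRA adapter.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "distilbert-base-uncased"
adapter_id = "SoreHabte/distilbert-base-uncased-lora-text-classification"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # assumed label count
model = PeftModel.from_pretrained(base, adapter_id)  # loads the LoRA adapter (and saved head, if present)

inputs = tokenizer("This was surprisingly good.", return_tensors="pt")
predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_class)
```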
**Jimmyhd/oneTextCol**
author: Jimmyhd · library_name: transformers · pipeline_tag: text-generation · downloads: 4 · likes: 0 · createdAt: 2024-01-25T11:13:04Z · last_modified: 2024-01-25T13:48:35Z
tags: transformers, safetensors, llama, text-generation, autotrain, license:other, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
**Skier8402/XLNetTokenizerFast_unigram_like**
author: Skier8402 · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T12:50:41Z · last_modified: 2024-01-25T13:37:11Z
tags: NLP, Unigram, en, dataset:wikitext, license:apache-2.0, region:us
---
license: apache-2.0
datasets:
- wikitext
language:
- en
tags:
- NLP
- Unigram
---
**thebabayaroy/q-FrozenLake-v1-4x4-noSlippery**
author: thebabayaroy · library_name: null · pipeline_tag: reinforcement-learning · downloads: 0 · likes: 0 · createdAt: 2024-01-25T13:34:51Z · last_modified: 2024-01-25T13:34:54Z
tags: FrozenLake-v1-4x4-no_slippery, q-learning, reinforcement-learning, custom-implementation, model-index, region:us
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumed backend; `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course notebook

model = load_from_hub(repo_id="thebabayaroy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
**tanatapanun/fine-tuned-BioBART-12-epochs-1024-input-128-output**
author: tanatapanun · library_name: transformers · pipeline_tag: text2text-generation · downloads: 91 · likes: 0 · createdAt: 2024-01-25T13:14:47Z · last_modified: 2024-01-25T13:30:43Z
tags: transformers, tensorboard, safetensors, bart, text2text-generation, generated_from_trainer, base_model:GanjinZero/biobart-base, base_model:finetune:GanjinZero/biobart-base, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-12-epochs-1024-input-128-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-12-epochs-1024-input-128-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5051
- Rouge1: 0.1576
- Rouge2: 0.0366
- Rougel: 0.108
- Rougelsum: 0.1085
- Gen Len: 34.29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 4.4833 | 0.0206 | 0.0051 | 0.0196 | 0.02 | 7.18 |
| No log | 2.0 | 302 | 1.8298 | 0.1027 | 0.0249 | 0.0865 | 0.0866 | 26.29 |
| No log | 3.0 | 453 | 1.6450 | 0.0738 | 0.0169 | 0.0564 | 0.0568 | 20.31 |
| 3.8819 | 4.0 | 604 | 1.5676 | 0.1469 | 0.0336 | 0.1162 | 0.1163 | 33.72 |
| 3.8819 | 5.0 | 755 | 1.5271 | 0.1739 | 0.0342 | 0.1442 | 0.1441 | 29.99 |
| 3.8819 | 6.0 | 906 | 1.5005 | 0.1616 | 0.0337 | 0.1275 | 0.1286 | 36.78 |
| 1.1888 | 7.0 | 1057 | 1.4912 | 0.1618 | 0.0417 | 0.121 | 0.1212 | 41.89 |
| 1.1888 | 8.0 | 1208 | 1.4854 | 0.1293 | 0.0396 | 0.0905 | 0.0911 | 36.59 |
| 1.1888 | 9.0 | 1359 | 1.4949 | 0.1451 | 0.0337 | 0.1153 | 0.1158 | 27.18 |
| 0.8646 | 10.0 | 1510 | 1.4936 | 0.1743 | 0.0355 | 0.1292 | 0.1301 | 34.24 |
| 0.8646 | 11.0 | 1661 | 1.5060 | 0.1639 | 0.0367 | 0.1233 | 0.124 | 30.56 |
| 0.8646 | 12.0 | 1812 | 1.5051 | 0.1576 | 0.0366 | 0.108 | 0.1085 | 34.29 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
**aymurai/flair-ner-spanish-judicial**
author: aymurai · library_name: flair · pipeline_tag: token-classification · downloads: 442 · likes: 4 · createdAt: 2023-01-26T13:18:50Z · last_modified: 2024-01-25T13:26:28Z
tags: flair, pytorch, token-classification, sequence-tagger-model, es, dataset:ArJuzPCyF10, license:mit, region:us
---
license: mit
language:
- es
tags:
- flair
- token-classification
- sequence-tagger-model
datasets:
- ArJuzPCyF10
metrics:
- precision
- recall
- f1-score
widget:
- text: 1. DECLARAR EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento de la suspensión del proceso a prueba, y SOBRESEER a EZEQUIEL CAMILO MARCONNI, DNI 11.222.333, en orden a los delitos de lesiones leves agravadas, amenazas simples y agravadas por el uso de armas.
library_name: flair
pipeline_tag: token-classification
---
# Model Description
Following the FLAIR guidelines for training a NER model, we trained a model on top of [BETO embeddings](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) (a Spanish version of BERT trained on a Spanish corpus) with a BiLSTM-CRF architecture.
This model was developed by [{ collective.ai }](https://collectiveai.io) as part of the [AymurAI](https://www.aymurai.info) project by [DataGenero](https://www.datagenero.org).
# About AymurAI, its uses and limitations
AymurAI is intended to be used as a tool to address the lack of available data in the judicial system on gender-based violence (GBV) rulings in Latin America. The goal is to increase report levels, build trust in the justice system, and improve access to justice for women and LGBTIQ+ people. AymurAI will generate and maintain anonymized datasets from legal rulings to understand GBV and support policy making, and also contribute to feminist collectives' campaigns.
AymurAI is still a prototype and is only being implemented in Criminal Court N°10 in the City of Buenos Aires, Argentina. Its capabilities are limited to semi-automated data collection and analysis, and the results may be subject to limitations such as the quality and consistency of the data, and the availability of the data. Additionally, the effectiveness of AymurAI in addressing the lack of transparency in the judicial system and improving access to justice may also depend on other factors such as the level of cooperation from court officials and the broader cultural and political context.
This model was trained on a closed dataset from an Argentine criminal court. It is designed to identify and extract relevant information from court rulings related to GBV cases. The use of a domain-specific dataset from an Argentine criminal court ensures that the model is tailored to the specific legal and cultural context, allowing for more accurate results. However, it also means that the model may not be applicable or effective in other countries or regions with different legal systems or cultural norms.
# Usage
## How to use the model in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)**.
Install it with `pip install flair`
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("aymurai/flair-ner-spanish-judicial")
# make example sentence
sentence = Sentence("1. DECLARAR EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento de la suspensión del proceso a prueba, y SOBRESEER a EZEQUIEL CAMILO MARCONNI, DNI 11.222.333, en orden a los delitos de lesiones leves agravadas, amenazas simples y agravadas por el uso de armas.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span[2:11]: "EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento" → DETALLE (0.5498)
Span[13:18]: "suspensión del proceso a prueba" → OBJETO_DE_LA_RESOLUCION (0.5647)
Span[20:21]: "SOBRESEER" → DETALLE (0.7766)
Span[22:25]: "EZEQUIEL CAMILO MARCONNI" → NOMBRE (0.6454)
Span[35:36]: "lesiones" → CONDUCTA (0.9457)
Span[36:38]: "leves agravadas" → CONDUCTA_DESCRIPCION (0.8818)
Span[39:40]: "amenazas" → CONDUCTA (0.956)
Span[40:48]: "simples y agravadas por el uso de armas" → CONDUCTA_DESCRIPCION (0.6866)
```
## Using the model in AymurAI platform
Please refer to [aymurai.info](https://www.aymurai.info) for more information about the full platform.
You can also check the development repository [here](https://github.com/aymurai/dev).
# Entities and metrics
## Description
Please refer to the entities' description table ([en](docs/en-entities-table.md)|[es](docs/es-entities-table.md)).
For a complete description about entities considered by AymurAI, refer to the [Glossary for the Dataset with gender perspective](https://docs.google.com/document/d/123B9T2abCEqBaxxOl5c7HBJZRdIMtKDWo6IKHIVil04/edit) written by [Criminal Court N°10](https://www.twitter.com/jpcyf10) (spanish only).
## Data
The model was trained with a dataset of 1200 legal rulings from an Argentine criminal court.
Due to the nature of the data the documents are kept private.
### List of annotation contributors
The dataset was manually annotated by:
* Diego Scopetta
* Franny Rodriguez Gerzovich ([email](fraanyrodriguez@gmail.com)|[linkedin](https://www.linkedin.com/in/francescarg))
* Laura Barreiro
* Matías Sosa
* Maximiliano Sosa
* Patricia Sandoval
* Santiago Bezchinsky ([email](santibezchinsky@gmail.com)|[linkedin](https://www.linkedin.com/in/santiago-bezchinsky))
* Zoe Rodriguez Gerzovich
## Metrics
| label | precision | recall | f1-score |
|-----------------------------------------------------|-----------|--------|----------|
| FECHA_DE_NACIMIENTO | 0.98 | 0.99 | 0.99 |
| FECHA_RESOLUCION | 0.95 | 0.98 | 0.96 |
| NACIONALIDAD | 0.94 | 0.98 | 0.96 |
| GENERO | 1.00 | 0.50 | 0.67 |
| HORA_DE_INICIO | 0.98 | 0.92 | 0.95 |
| NOMBRE | 0.94 | 0.95 | 0.95 |
| FRASES_AGRESION | 0.90 | 0.98 | 0.94 |
| HORA_DE_CIERRE | 0.90 | 0.92 | 0.91 |
| NIVEL_INSTRUCCION | 0.85 | 0.94 | 0.90 |
| N_EXPTE_EJE | 0.85 | 0.93 | 0.89 |
| TIPO_DE_RESOLUCION | 0.63 | 0.93 | 0.75 |
| VIOLENCIA_DE_GENERO | 0.49 | 0.59 | 0.54 |
| RELACION_Y_TIPO_ENTRE_ACUSADO/A_Y_DENUNCIANTE | 0.93 | 0.76 | 0.84 |
| HIJOS_HIJAS_EN_COMUN | 0.47 | 0.57 | 0.52 |
| MODALIDAD_DE_LA_VIOLENCIA | 0.57 | 0.56 | 0.57 |
| FECHA_DEL_HECHO | 0.83 | 0.83 | 0.83 |
| CONDUCTA | 0.79 | 0.67 | 0.73 |
| ART_INFRINGIDO | 0.76 | 0.74 | 0.75 |
| DETALLE | 0.53 | 0.37 | 0.43 |
| OBJETO_DE_LA_RESOLUCION | 0.60 | 0.78 | 0.68 |
| CONDUCTA_DESCRIPCION | 0.54 | 0.43 | 0.48 |
| LUGAR_DEL_HECHO | 0.75 | 0.47 | 0.58 |
| EDAD_AL_MOMENTO_DEL_HECHO | 0.50 | 0.20 | 0.29 |
| PERSONA_ACUSADA_NO_DETERMINADA | 0.71 | 0.19 | 0.30 |
| | | | |
| macro avg | 0.77 | 0.72 | 0.73 |
# GitHub
You can see our open-source development [here](https://github.com/AymurAI/).
# Citation
Please cite [the following paper](https://drive.google.com/file/d/1P-hW0JKXWZ44Fn94fDVIxQRTExkK6m4Y/view) when using AymurAI:
```bibtex
@techreport{feldfeber2022,
author = "Feldfeber, Ivana and Quiroga, Yasmín Belén and Guevara, Clarissa and Ciolfi Felice, Marianela",
title = "Feminisms in Artificial Intelligence: Automation Tools towards a Feminist Judiciary Reform in Argentina and Mexico",
institution = "DataGenero",
year = "2022",
url = "https://drive.google.com/file/d/1P-hW0JKXWZ44Fn94fDVIxQRTExkK6m4Y/view"
}
```
**mlx-community/OpenHermes-2.5-Mistral-7B-4bit-mlx**
author: mlx-community · library_name: mlx · pipeline_tag: null · downloads: 9 · likes: 2 · createdAt: 2024-01-25T12:50:29Z · last_modified: 2024-01-25T13:16:23Z
tags: mlx, safetensors, mistral, instruct, finetune, chatml, gpt4, synthetic data, distillation, en, base_model:mistralai/Mistral-7B-v0.1, base_model:finetune:mistralai/Mistral-7B-v0.1, license:apache-2.0, region:us
---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- mlx
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: OpenHermes-2-Mistral-7B
results: []
---
# mlx-community/OpenHermes-2.5-Mistral-7B-4bit-mlx
This model was converted to MLX format from [`teknium/OpenHermes-2.5-Mistral-7B`](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
Refer to the [original model card](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/OpenHermes-2.5-Mistral-7B-4bit-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
**mmnga/cyberagent-calm2-7b-chat-dpo-experimental-gguf**
author: mmnga · library_name: null · pipeline_tag: null · downloads: 213 · likes: 4 · createdAt: 2024-01-25T12:28:54Z · last_modified: 2024-01-25T13:14:27Z
tags: gguf, license:cc-by-4.0, endpoints_compatible, region:us
---
license: cc-by-4.0
---
# cyberagent-calm2-7b-chat-dpo-experimental-gguf
A GGUF-format conversion of [calm2-7b-chat-dpo-experimental, published by cyberagent](https://huggingface.co/cyberagent/calm2-7b-chat-dpo-experimental).
## convert
Conversion was done with [convert.py for calm2](https://gist.github.com/mmnga/6f3b0b9db773c7cc4d58421ea7ebf98e).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'cyberagent-calm2-7b-chat-dpo-experimental-q4_0.gguf' -p "USER: AIによって私達の暮らしはどのように変わりますか? \nASSISTANT: " -n 256 --temp 0.8
```
---
**nosdigitalmedia/telegram-drugs-classification**
author: nosdigitalmedia · library_name: sklearn · pipeline_tag: text-classification · downloads: 0 · likes: 8 · createdAt: 2024-01-25T12:56:14Z · last_modified: 2024-01-25T13:00:46Z
tags: sklearn, joblib, text-classification, nl, region:us
---
tags:
- sklearn
- text-classification
language:
- nl
metrics:
- accuracy
- hamming-loss
---
# Model card for NOS Drug-Related Text Classification on Telegram
The NOS editorial team is conducting an investigation into drug-related messages on Telegram. Thousands of Telegram messages have been labeled as drug-related content (or not), including details on the specific type of drugs and the delivery method. This data was used to train a model that scales the labeling up and automatically labels millions more messages.
## Methodology
A logistic regression model was first trained for binary classification. Text data was converted to numeric features with a TF-IDF (term frequency-inverse document frequency) vectorizer, which lets the model learn patterns and relationships between words. The model achieved 97% accuracy on the test set.
To handle messages that can carry multiple labels, a MultiOutputClassifier was used as an extension. This addresses the complexity of associating a text message with multiple categories such as "soft drugs", "hard drugs", and "medicines". One-hot encoding was used for the multi-label transformation.
Performance was evaluated with Hamming loss, a metric suited to multi-label classification. The model achieved a Hamming loss of 0.04, i.e. 96% accuracy per label.
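A sketch of the training pipeline described above follows. The label columns are taken from the mapping used in the inference example further down; the file name, text column, and train/test split are illustrative assumptions rather than the NOS code.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical dataframe: one text column plus one-hot encoded label columns.
df = pd.read_csv("labeled_messages.csv")
label_cols = ["bezorging", "bulk", "designer", "drugsad", "geendrugsad",
              "harddrugs", "medicijnen", "pickup", "post", "softdrugs"]
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df[label_cols], test_size=0.2, random_state=42
)

# TF-IDF features feeding a logistic regression, wrapped for multi-label output.
clf = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(X_train, y_train)

# Hamming loss = fraction of wrongly predicted labels; 0.04 corresponds to ~96% per-label accuracy.
print(hamming_loss(y_test, clf.predict(X_test)))
```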
### Tools used to train the model
- Python
- scikit-learn
- pandas
- numpy
### How to Get Started with the Model
Use the code below to get started with the model.
```python
from joblib import load
# load the model
clf = load('model.joblib')
# make some predictions
text_messages = [
"""
Oud kleding te koop! Stuur een berichtje
We repareren ook!
""",
"""
COKE/XTC
* 1Gram = €50
* 5Gram = €230
"""]
mapping = {0:"bezorging", 1:"bulk", 2:"designer", 3:"drugsad", 4:"geendrugsad", 5:"harddrugs", 6:"medicijnen", 7: "pickup", 8: "post", 9:"softdrugs"}
labels = []
for message in clf.predict(text_messages):
label = []
for idx, labeled in enumerate(message):
if labeled == 1:
label.append(mapping[idx])
labels.append(label)
print(labels)
```
## Details
- **Shared by** Dutch Public Broadcasting Foundation (NOS)
- **Model type:** text-classification
- **Language:** Dutch
- **License:** Creative Commons Attribution Non Commercial No Derivatives 4.0
**jlbaker361/res100e**
author: jlbaker361 · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-20T08:04:07Z · last_modified: 2024-01-25T12:52:29Z
tags: region:us
---
{}
---
Trained to classify images.
- epochs: 100
- dataset: jlbaker361/wikiart
- n classes: 27
- image dim: 224
- pretrained version: resnet18
- batch_size: 4
**Nehc/gia**
author: Nehc · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T10:23:23Z · last_modified: 2024-01-25T12:51:37Z
tags: region:us
Checkpoint for https://github.com/Nehc/gia
**sosancn/clasificador-muchocine**
author: sosancn · library_name: transformers · pipeline_tag: text-classification · downloads: 90 · likes: 0 · createdAt: 2024-01-25T12:49:35Z · last_modified: 2024-01-25T12:49:57Z
tags: transformers, safetensors, electra, text-classification, classification, generated_from_trainer, base_model:mrm8488/electricidad-base-discriminator, base_model:finetune:mrm8488/electricidad-base-discriminator, autotrain_compatible, endpoints_compatible, region:us
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4858
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3230 | 0.3806 |
| 1.3897 | 2.0 | 776 | 1.3217 | 0.4258 |
| 0.9605 | 3.0 | 1164 | 1.4858 | 0.4297 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
**wahaha1987/a2c-PandaPickAndPlace-v3**
author: wahaha1987 · library_name: stable-baselines3 · pipeline_tag: reinforcement-learning · downloads: 1 · likes: 0 · createdAt: 2024-01-25T12:43:23Z · last_modified: 2024-01-25T12:47:37Z
tags: stable-baselines3, PandaPickAndPlace-v3, deep-reinforcement-learning, reinforcement-learning, model-index, region:us
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -43.00 +/- 15.52
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
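The block above is a placeholder; a minimal loading sketch is given below. The checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption, as is the panda-gym environment setup.

```python
import gymnasium as gym
import panda_gym  # registers PandaPickAndPlace-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; check the repository's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="wahaha1987/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```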
**tanatapanun/fine-tuned-BioBART-15-epochs-1024-input-128-output**
author: tanatapanun · library_name: transformers · pipeline_tag: text2text-generation · downloads: 96 · likes: 0 · createdAt: 2024-01-25T12:15:04Z · last_modified: 2024-01-25T12:34:27Z
tags: transformers, tensorboard, safetensors, bart, text2text-generation, generated_from_trainer, base_model:GanjinZero/biobart-base, base_model:finetune:GanjinZero/biobart-base, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-15-epochs-1024-input-128-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-15-epochs-1024-input-128-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5422
- Rouge1: 0.1912
- Rouge2: 0.042
- Rougel: 0.1492
- Rougelsum: 0.15
- Gen Len: 29.97
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 5.4038 | 0.0026 | 0.0006 | 0.0024 | 0.0024 | 5.98 |
| No log | 2.0 | 302 | 1.8682 | 0.0475 | 0.0066 | 0.0452 | 0.043 | 6.59 |
| No log | 3.0 | 453 | 1.6512 | 0.0753 | 0.0169 | 0.0587 | 0.0586 | 22.62 |
| 4.1375 | 4.0 | 604 | 1.5702 | 0.1472 | 0.0366 | 0.1124 | 0.1113 | 42.92 |
| 4.1375 | 5.0 | 755 | 1.5256 | 0.167 | 0.0337 | 0.1309 | 0.1305 | 45.89 |
| 4.1375 | 6.0 | 906 | 1.5057 | 0.1435 | 0.0305 | 0.1132 | 0.1134 | 32.45 |
| 1.1893 | 7.0 | 1057 | 1.4854 | 0.1655 | 0.0388 | 0.129 | 0.1295 | 34.34 |
| 1.1893 | 8.0 | 1208 | 1.4845 | 0.1635 | 0.0423 | 0.1238 | 0.1252 | 37.77 |
| 1.1893 | 9.0 | 1359 | 1.4980 | 0.1712 | 0.0363 | 0.1382 | 0.1388 | 29.68 |
| 0.8262 | 10.0 | 1510 | 1.5052 | 0.1917 | 0.0431 | 0.1486 | 0.1497 | 32.88 |
| 0.8262 | 11.0 | 1661 | 1.5167 | 0.1731 | 0.0374 | 0.1402 | 0.1403 | 29.9 |
| 0.8262 | 12.0 | 1812 | 1.5267 | 0.1675 | 0.035 | 0.1335 | 0.1337 | 29.35 |
| 0.8262 | 13.0 | 1963 | 1.5329 | 0.1839 | 0.0401 | 0.1465 | 0.1465 | 28.23 |
| 0.61 | 14.0 | 2114 | 1.5440 | 0.1904 | 0.0452 | 0.1522 | 0.1527 | 29.33 |
| 0.61 | 15.0 | 2265 | 1.5422 | 0.1912 | 0.042 | 0.1492 | 0.15 | 29.97 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
**CLMBR/npi-only-transformer-2**
author: CLMBR · library_name: transformers · pipeline_tag: text-generation · downloads: 13 · likes: 0 · createdAt: 2024-01-18T14:30:13Z · last_modified: 2024-01-25T12:28:06Z
tags: transformers, pytorch, opt, text-generation, generated_from_trainer, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2265 | 0.03 | 76320 | 4.1999 |
| 4.0215 | 1.03 | 152640 | 4.0292 |
| 3.9133 | 0.03 | 228960 | 3.9544 |
| 3.8411 | 1.03 | 305280 | 3.9125 |
| 3.7911 | 0.03 | 381600 | 3.8867 |
| 3.7507 | 1.03 | 457920 | 3.8704 |
| 3.7187 | 0.03 | 534240 | 3.8599 |
| 3.6904 | 1.03 | 610560 | 3.8540 |
| 3.6628 | 0.03 | 686880 | 3.8482 |
| 3.6345 | 1.03 | 763200 | 3.8461 |
| 3.6098 | 0.03 | 839520 | 3.8440 |
| 3.5897 | 1.03 | 915840 | 3.8438 |
| 3.572 | 0.03 | 992160 | 3.8438 |
| 3.5528 | 1.03 | 1068480 | 3.8441 |
| 3.5343 | 0.03 | 1144800 | 3.8445 |
| 3.5293 | 1.03 | 1221120 | 3.8463 |
| 3.5111 | 0.03 | 1297440 | 3.8466 |
| 3.499 | 1.03 | 1373760 | 3.8482 |
| 3.4854 | 0.03 | 1450080 | 3.8505 |
| 3.4724 | 0.03 | 1526400 | 3.8520 |
| 3.4654 | 1.03 | 1602720 | 3.8540 |
| 3.455 | 0.03 | 1679040 | 3.8551 |
| 3.4482 | 1.03 | 1755360 | 3.8560 |
| 3.4398 | 0.03 | 1831680 | 3.8573 |
| 3.4268 | 1.03 | 1908000 | 3.8586 |
| 3.4123 | 0.03 | 1984320 | 3.8589 |
| 3.3989 | 0.03 | 2060640 | 3.8610 |
| 3.3867 | 1.03 | 2136960 | 3.8618 |
| 3.3755 | 0.03 | 2213280 | 3.8635 |
| 3.364 | 1.03 | 2289600 | 3.8636 |
| 3.3533 | 0.03 | 2365920 | 3.8637 |
| 3.3544 | 0.03 | 2442240 | 3.8637 |
| 3.3405 | 1.03 | 2518560 | 3.8649 |
| 3.3325 | 0.03 | 2594880 | 3.8654 |
| 3.3219 | 1.03 | 2671200 | 3.8645 |
| 3.3145 | 0.03 | 2747520 | 3.8646 |
| 3.3084 | 1.03 | 2823840 | 3.8649 |
| 3.3026 | 0.03 | 2900160 | 3.8641 |
| 3.299 | 1.03 | 2976480 | 3.8632 |
| 3.2939 | 0.02 | 3052726 | 3.8616 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
**Athuin/tinyLama-german**
author: Athuin · library_name: transformers · pipeline_tag: text-generation · downloads: 115 · likes: 2 · createdAt: 2024-01-25T12:10:15Z · last_modified: 2024-01-25T12:14:04Z
tags: transformers, pytorch, llama, text-generation, text-generation-inference, unsloth, trl, en, base_model:unsloth/tinyllama, base_model:finetune:unsloth/tinyllama, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama
---
# Uploaded model
- **Developed by:** Athuin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
**ramsi-k/ppo-Huggy**
author: ramsi-k · library_name: ml-agents · pipeline_tag: reinforcement-learning · downloads: 6 · likes: 0 · createdAt: 2024-01-25T12:11:14Z · last_modified: 2024-01-25T12:11:31Z
tags: ml-agents, tensorboard, onnx, Huggy, deep-reinforcement-learning, reinforcement-learning, ML-Agents-Huggy, region:us
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ramsi-k/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
**dvilasuero/phi2-lora-quantized-distilabel-intel-orca-dpo-pairs**
author: dvilasuero · library_name: peft · pipeline_tag: null · downloads: 2 · likes: 0 · createdAt: 2024-01-25T12:05:49Z · last_modified: 2024-01-25T12:06:01Z
tags: peft, safetensors, trl, dpo, generated_from_trainer, base_model:microsoft/phi-2, base_model:adapter:microsoft/phi-2, license:mit, region:us
---
license: mit
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-lora-quantized-distilabel-intel-orca-dpo-pairs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2-lora-quantized-distilabel-intel-orca-dpo-pairs
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5173
- Rewards/chosen: -0.0019
- Rewards/rejected: -0.7725
- Rewards/accuracies: 0.7816
- Rewards/margins: 0.7706
- Logps/rejected: -233.5226
- Logps/chosen: -214.1249
- Logits/rejected: 0.3181
- Logits/chosen: 0.2015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6887 | 0.14 | 20 | 0.6767 | 0.0030 | -0.0331 | 0.6341 | 0.0361 | -226.1282 | -214.0752 | 0.2238 | 0.1343 |
| 0.6472 | 0.27 | 40 | 0.6171 | 0.0141 | -0.1710 | 0.7639 | 0.1852 | -227.5079 | -213.9642 | 0.2464 | 0.1508 |
| 0.5759 | 0.41 | 60 | 0.5584 | 0.0123 | -0.4023 | 0.7808 | 0.4146 | -229.8206 | -213.9829 | 0.2774 | 0.1736 |
| 0.526 | 0.54 | 80 | 0.5326 | 0.0036 | -0.5790 | 0.7816 | 0.5826 | -231.5877 | -214.0700 | 0.2983 | 0.1884 |
| 0.4963 | 0.68 | 100 | 0.5225 | 0.0020 | -0.6964 | 0.7825 | 0.6984 | -232.7611 | -214.0853 | 0.3131 | 0.1986 |
| 0.4977 | 0.81 | 120 | 0.5188 | -0.0025 | -0.7533 | 0.7816 | 0.7508 | -233.3300 | -214.1302 | 0.3162 | 0.2002 |
| 0.4818 | 0.95 | 140 | 0.5173 | -0.0019 | -0.7725 | 0.7816 | 0.7706 | -233.5226 | -214.1249 | 0.3181 | 0.2015 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
**rcade/glue_rcade_test_model**
author: rcade · library_name: transformers · pipeline_tag: text-classification · downloads: 96 · likes: 0 · createdAt: 2024-01-25T12:04:07Z · last_modified: 2024-01-25T12:05:23Z
tags: transformers, tensorboard, safetensors, bert, text-classification, generated_from_trainer, base_model:google-bert/bert-base-cased, base_model:finetune:google-bert/bert-base-cased, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: glue_rcade_test_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_rcade_test_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4713
- Accuracy: 0.8358
- F1: 0.8870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5838 | 1.0 | 230 | 0.5101 | 0.7672 | 0.8455 |
| 0.3994 | 2.0 | 460 | 0.5092 | 0.8088 | 0.875 |
| 0.2417 | 3.0 | 690 | 0.4713 | 0.8358 | 0.8870 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
**tiagoblima/mbart50-qg-ap-peft-oficial**
author: tiagoblima · library_name: null · pipeline_tag: null · downloads: 0 · likes: 0 · createdAt: 2024-01-25T11:07:51Z · last_modified: 2024-01-25T12:03:37Z
tags: safetensors, generated_from_trainer, dataset:tiagoblima/du-qg-squadv1_pt, base_model:facebook/mbart-large-50, base_model:finetune:facebook/mbart-large-50, license:mit, region:us
---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
datasets:
- tiagoblima/du-qg-squadv1_pt
model-index:
- name: mbart50-qg-ap-peft-oficial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-qg-ap-peft-oficial
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the tiagoblima/du-qg-squadv1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8113 | 1.0 | 2367 | 2.6544 |
| 2.5839 | 2.0 | 4734 | 2.5805 |
| 2.4789 | 3.0 | 7101 | 2.5482 |
| 2.4232 | 4.0 | 9468 | 2.5294 |
| 2.3971 | 5.0 | 11835 | 2.5230 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.1
**rcade/test_falcon_model_learning**
author: rcade · library_name: transformers · pipeline_tag: text-classification · downloads: 92 · likes: 0 · createdAt: 2024-01-24T14:11:14Z · last_modified: 2024-01-25T12:03:14Z
tags: transformers, tensorboard, safetensors, bert, text-classification, generated_from_trainer, base_model:google-bert/bert-base-cased, base_model:finetune:google-bert/bert-base-cased, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test_falcon_model_learning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_falcon_model_learning
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4713
- Accuracy: 0.8358
- F1: 0.8870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5838 | 1.0 | 230 | 0.5101 | 0.7672 | 0.8455 |
| 0.3994 | 2.0 | 460 | 0.5092 | 0.8088 | 0.875 |
| 0.2417 | 3.0 | 690 | 0.4713 | 0.8358 | 0.8870 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
**iamgokul/my-pet-dog-ggs**
author: iamgokul · library_name: diffusers · pipeline_tag: text-to-image · downloads: 3 · likes: 0 · createdAt: 2024-01-25T11:58:00Z · last_modified: 2024-01-25T12:02:34Z
tags: diffusers, safetensors, NxtWave-GenAI-Webinar, text-to-image, stable-diffusion, license:creativeml-openrail-m, autotrain_compatible, endpoints_compatible, diffusers:StableDiffusionPipeline, region:us
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-ggs Dreambooth model trained by iamgokul following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21EC15
Sample pictures of this concept:




**microsoft/beit-large-finetuned-ade-640-640**
author: microsoft · library_name: transformers · pipeline_tag: image-segmentation · downloads: 486 · likes: 14 · createdAt: 2022-03-02T23:29:05Z · last_modified: 2024-01-25T11:51:04Z
tags: transformers, pytorch, safetensors, beit, vision, image-segmentation, dataset:scene_parse_150, arxiv:2106.08254, license:apache-2.0, endpoints_compatible, region:us
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# BEiT (large-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](https://huggingface.co/datasets/scene_parse_150) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]["file"])  # assumes the fixtures dataset exposes local file paths via a "file" column
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
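The snippet stops at the raw logits; the short continuation below shows the standard way to turn them into a per-pixel segmentation map (this step is an addition, not part of the original card).

```python
import torch

# Upsample the logits to the input image size and take the per-pixel argmax
# to obtain one ADE20k class index per pixel.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]  # shape: (height, width)
```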
Currently, both the feature extractor and model support PyTorch.
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
haturusinghe/1st_f1_0.5936_acc_0.83_xlm-roberta-large_mrp_2e-05_8_RAdam
|
haturusinghe
| 2024-01-25T11:50:00Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T11:46:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lalith16/Zephyr7bfinetunedepoch5latest
|
Lalith16
| 2024-01-25T11:46:16Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:finetune:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-01-25T11:45:32Z |
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.367 | 0.35 | 100 | 1.3594 |
| 0.774 | 0.69 | 200 | 0.9080 |
| 0.7724 | 1.04 | 300 | 0.7654 |
| 0.6987 | 1.39 | 400 | 0.7205 |
| 0.6482 | 1.74 | 500 | 0.6828 |
| 0.5204 | 2.08 | 600 | 0.6499 |
| 0.5828 | 2.43 | 700 | 0.6353 |
| 0.522 | 2.78 | 800 | 0.6170 |
| 0.3334 | 3.12 | 900 | 0.6522 |
| 0.3643 | 3.47 | 1000 | 0.6149 |
| 0.3551 | 3.82 | 1100 | 0.6138 |
| 0.3144 | 4.17 | 1200 | 0.6430 |
| 0.3194 | 4.51 | 1300 | 0.6468 |
| 0.3341 | 4.86 | 1400 | 0.6287 |
| 0.2982 | 5.21 | 1500 | 0.6403 |
| 0.3056 | 5.56 | 1600 | 0.6394 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
haturusinghe/1st_f1_0.574_acc_0.922_25_01-1140_xlm-roberta-large_mrp_2e-05_8_RAdam
|
haturusinghe
| 2024-01-25T11:44:16Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T11:40:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elliotthwangmsa/KimLan-phi-2_zh
|
elliotthwangmsa
| 2024-01-25T11:41:54Z | 34 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:38:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lokaspire/mistral-latest
|
lokaspire
| 2024-01-25T11:38:10Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T09:48:31Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-latest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-latest
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
nm-testing/TinyLlama-1.1B-Chat-v1.0-gsm8k-pruned50-quant-ds
|
nm-testing
| 2024-01-25T11:38:04Z | 2 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:28:05Z |
```python
from deepsparse import TextGeneration
prompt = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week?"
formatted_prompt = f"Question:{prompt}\nAnswer:"
# path to a local DeepSparse deployment directory exported for this model
model = TextGeneration(model_path="/home/mwiti/mwitiderrick/final_obcq_deployment/deployment")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
He runs 30*3=<<30*3=120>>120 meters a week
So he runs 120*120=<<120*120=12000>>12000 meters a week
#### 12000
"""
```
|
simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-1024-0.05-lora
|
simonycl
| 2024-01-25T11:36:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-25T11:36:21Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/Everyone-Coder-33b-Base-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-25T11:27:30Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:18:23Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-Base

EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I'm not very good at running LLM benchmarks, but from hand-testing the model on https://edabit.com/challenge coding challenges against an up-to-date GPT-4, my model is hands down beating it in coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add (Always end with "<|EOT|>") in addition to your system prompt and (Always end your response with "<|EOT|>") at the end of the user message, as in the preset below. Then add <|EOT|> as a custom stop string in your LM text-generation interface.
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
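For reference, here is a minimal sketch of filling that template in Python; the system and user strings below are placeholders of my own, not part of the preset:
```python
system = "You are a helpful coding assistant."  # placeholder system prompt
user = "Write a Python function that reverses a string."  # placeholder user message

prompt = (
    'Always end with "<|EOT|>"\n'
    f"{system}\n"
    "<|User|>\n"
    f'{user}. Always end your response with "<|EOT|>"\n'
    "<|Assistant|>\n"
)
# send `prompt` to your text-generation backend and register "<|EOT|>" as a stop string
```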
The models used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗
You can find the write-up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
The config for the merge can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
LoneStriker/Everyone-Coder-33b-Base-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-25T11:18:21Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:09:50Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-Base

EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I'm not very good at running LLM benchmarks, but from hand-testing the model on https://edabit.com/challenge coding challenges against an up-to-date GPT-4, my model is hands down beating it in coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add (Always end with "<|EOT|>") in addition to your system prompt and (Always end your response with "<|EOT|>") at the end of the user message, as in the preset below. Then add <|EOT|> as a custom stop string in your LM text-generation interface.
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
The models used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗
You can find the write-up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
The config for the merge can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
G-Root/speaker-diarization-optimized
|
G-Root
| 2024-01-25T11:15:38Z | 4 | 0 |
pyannote-audio
|
[
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"arxiv:2111.14448",
"arxiv:2012.01477",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-25T11:13:37Z |
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
license: mit
extra_gated_prompt: "The collected information will help acquire a better knowledge of pyannote.audio userbase and help its maintainers improve it further. Though this pipeline uses MIT license and will always remain open-source, we will occasionally email you about premium pipelines and paid services around pyannote."
extra_gated_fields:
Company/university: text
Website: text
---
Using this open-source pipeline in production?
Make the most of it thanks to our [consulting services](https://herve.niderb.fr/consulting.html).
# 🎹 Speaker diarization 3.1
This pipeline is the same as [`pyannote/speaker-diarization-3.0`](https://hf.co/pyannote/speaker-diarization-3.0) except it removes the [problematic](https://github.com/pyannote/pyannote-audio/issues/1537) use of `onnxruntime`.
Both speaker segmentation and embedding now run in pure PyTorch. This should ease deployment and possibly speed up inference.
It requires pyannote.audio version 3.1 or higher.
It ingests mono audio sampled at 16kHz and outputs speaker diarization as an [`Annotation`](http://pyannote.github.io/pyannote-core/structure.html#annotation) instance:
- stereo or multi-channel audio files are automatically downmixed to mono by averaging the channels.
- audio files sampled at a different rate are resampled to 16kHz automatically upon loading.
## Requirements
1. Install [`pyannote.audio`](https://github.com/pyannote/pyannote-audio) `3.1` with `pip install pyannote.audio`
2. Accept [`pyannote/segmentation-3.0`](https://hf.co/pyannote/segmentation-3.0) user conditions
3. Accept [`pyannote/speaker-diarization-3.1`](https://hf.co/pyannote/speaker-diarization-3.1) user conditions
4. Create access token at [`hf.co/settings/tokens`](https://hf.co/settings/tokens).
## Usage
```python
# instantiate the pipeline
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained(
"pyannote/speaker-diarization-3.1",
use_auth_token="HUGGINGFACE_ACCESS_TOKEN_GOES_HERE")
# run the pipeline on an audio file
diarization = pipeline("audio.wav")
# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
diarization.write_rttm(rttm)
```
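The returned `diarization` object is a `pyannote.core.Annotation`; a common follow-up is to iterate over the speaker turns, for example:
```python
# print start/end timestamps and the speaker label of each turn
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"start={turn.start:.1f}s stop={turn.end:.1f}s speaker_{speaker}")
```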
### Processing on GPU
`pyannote.audio` pipelines run on CPU by default.
You can send them to GPU with the following lines:
```python
import torch
pipeline.to(torch.device("cuda"))
```
### Processing from memory
Pre-loading audio files in memory may result in faster processing:
```python
import torchaudio

waveform, sample_rate = torchaudio.load("audio.wav")
diarization = pipeline({"waveform": waveform, "sample_rate": sample_rate})
```
### Monitoring progress
Hooks are available to monitor the progress of the pipeline:
```python
from pyannote.audio.pipelines.utils.hook import ProgressHook
with ProgressHook() as hook:
diarization = pipeline("audio.wav", hook=hook)
```
### Controlling the number of speakers
In case the number of speakers is known in advance, one can use the `num_speakers` option:
```python
diarization = pipeline("audio.wav", num_speakers=2)
```
One can also provide lower and/or upper bounds on the number of speakers using `min_speakers` and `max_speakers` options:
```python
diarization = pipeline("audio.wav", min_speakers=2, max_speakers=5)
```
## Benchmark
This pipeline has been benchmarked on a large collection of datasets.
Processing is fully automatic:
- no manual voice activity detection (as is sometimes the case in the literature)
- no manual number of speakers (though it is possible to provide it to the pipeline)
- no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named _"Full"_ in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):
- no forgiveness collar
- evaluation of overlapped speech
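With this setup, the reported DER is simply the sum of the three component rates, each expressed as a percentage of total reference speech:

DER = false alarm + missed detection + speaker confusion

so, for instance, the AISHELL-4 row below gives 3.8 + 4.4 + 4.0 = 12.2.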
| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| [AISHELL-4](http://www.openslr.org/111/) | 12.2 | 3.8 | 4.4 | 4.0 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.eval) |
| [AliMeeting (_channel 1_)](https://www.openslr.org/119/) | 24.4 | 4.4 | 10.0 | 10.0 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.eval) |
| [AMI (_headset mix,_](https://groups.inf.ed.ac.uk/ami/corpus/) [_only_words_)](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.8 | 3.6 | 9.5 | 5.7 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.eval) |
| [AMI (_array1, channel 1,_](https://groups.inf.ed.ac.uk/ami/corpus/) [_only_words)_](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 22.4 | 3.8 | 11.2 | 7.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.eval) |
| [AVA-AVD](https://arxiv.org/abs/2111.14448) | 50.0 | 10.8 | 15.7 | 23.4 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.eval) |
| [DIHARD 3 (_Full_)](https://arxiv.org/abs/2012.01477) | 21.7 | 6.2 | 8.1 | 7.3 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.eval) |
| [MSDWild](https://x-lance.github.io/MSDWILD/) | 25.3 | 5.8 | 8.0 | 11.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.eval) |
| [REPERE (_phase 2_)](https://islrn.org/resources/360-758-359-485-0/) | 7.8 | 1.8 | 2.6 | 3.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.eval) |
| [VoxConverse (_v0.3_)](https://github.com/joonson/voxconverse) | 11.3 | 4.1 | 3.4 | 3.8 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.eval) |
## Citations
```bibtex
@inproceedings{Plaquet23,
author={Alexis Plaquet and Hervé Bredin},
title={{Powerset multi-class cross entropy loss for neural speaker diarization}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
}
```
```bibtex
@inproceedings{Bredin23,
author={Hervé Bredin},
title={{pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
}
```
|
emonikram/dl_project
|
emonikram
| 2024-01-25T11:07:28Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-24T20:10:42Z |
---
title: Cat Vs Rabbit Classification
emoji: 🌍
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.15.0
app_file: app.py
pinned: false
license: mit
contributed by: Ikramul Islam Emon
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Artefact2/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-GGUF
|
Artefact2
| 2024-01-25T11:07:08Z | 51 | 3 | null |
[
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T15:01:17Z |
---
license: apache-2.0
language:
- en
---
These are GGUF quantized versions of [notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES](https://huggingface.co/notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES).
The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later.
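As a quick sanity check, one way to load one of these GGUF files is through the `llama-cpp-python` bindings. This is a minimal sketch: the filename is a placeholder for whichever quantization you downloaded, and your install must be built against a recent enough llama.cpp for the IQ2 formats:
```python
from llama_cpp import Llama

# path to a locally downloaded GGUF file from this repo (placeholder filename)
llm = Llama(model_path="Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-IQ2_XS.gguf", n_ctx=4096)

output = llm("Write a haiku about quantization.", max_tokens=64)
print(output["choices"][0]["text"])
```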
|
LoneStriker/Everyone-Coder-33b-Base-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-25T11:02:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T10:56:01Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-Base

EveryoneLLM series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I'm not very good at running LLM benchmarks, but from hand-testing the model on https://edabit.com/challenge coding challenges against an up-to-date GPT-4, my model is hands down beating it in coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add (Always end with "<|EOT|>") in addition to your system prompt and (Always end your response with "<|EOT|>") at the end of the user message, as in the preset below. Then add <|EOT|> as a custom stop string in your LM text-generation interface.
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
The models used in this merge were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗
You can find the write-up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
The config for the merge can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
DataVare/NSF-To-MBOX-Converter
|
DataVare
| 2024-01-25T10:59:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-25T10:59:00Z |
For effective NSF to MBOX email conversion, utilize DataVare NSF to MBOX Converter. With a few clicks, the entire NSF to MBOX conversion procedure is finished, and the folder structure in the MBOX format is preserved just as it was in the NSF format. It transfers Lotus Notes emails—including those with attachments—into Mailbox format without making any modifications. The user can accomplish the Lotus Notes to MBOX migration with the aid of its simple user interface. You don't need to be technically skilled or knowledgeable to use our NSF to MBOX Converter. Its basic yet helpful characteristics make the user's task easier.
With the use of its sophisticated capabilities, users can export all NSF email data in MBOX format without any modifications. Before obtaining a license key for the full NSF to MBOX Migration, check out its features and functions with the help of its free demo edition.
Read More:- https://www.datavare.com/software/nsf-to-mbox-converter-expert.html
|
HatimF/bart_booksum_1
|
HatimF
| 2024-01-25T10:57:57Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T15:19:57Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart_booksum_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_booksum_1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0264
- Rouge1: 0.1795
- Rouge2: 0.0368
- Rougel: 0.1337
- Rougelsum: 0.1334
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1515
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8176 | 1.0 | 80 | 3.0712 | 0.1765 | 0.0372 | 0.1298 | 0.1299 | 20.0 |
| 3.3835 | 2.0 | 160 | 3.0365 | 0.1871 | 0.0444 | 0.1382 | 0.1382 | 20.0 |
| 3.2162 | 3.0 | 240 | 3.0281 | 0.1892 | 0.0426 | 0.1385 | 0.1383 | 20.0 |
| 3.1147 | 4.0 | 320 | 3.0264 | 0.1795 | 0.0368 | 0.1337 | 0.1334 | 20.0 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Pavan-124/wines_multilingual
|
Pavan-124
| 2024-01-25T10:56:19Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-25T10:56:01Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: wines_multilingual
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wines_multilingual
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1375
- Validation Loss: 0.0916
- Train Precision: 0.8566
- Train Recall: 0.8662
- Train F1: 0.8614
- Train Accuracy: 0.9618
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5724, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1375 | 0.0916 | 0.8566 | 0.8662 | 0.8614 | 0.9618 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
unreal-hug/segformer-b0-finetuned-segments-ECHO-jan-25-v2
|
unreal-hug
| 2024-01-25T10:51:48Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-01-25T09:17:39Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-ECHO-jan-25-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-ECHO-jan-25-v2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the unreal-hug/REAL_DATASET_SEG_401_6_lbls dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4155
- Mean Iou: 0.3349
- Mean Accuracy: 0.3935
- Overall Accuracy: 0.5591
- Accuracy Unlabeled: nan
- Accuracy Lv: 0.6815
- Accuracy Rv: 0.3865
- Accuracy Ra: 0.5805
- Accuracy La: 0.6544
- Accuracy Vs: 0.1155
- Accuracy As: nan
- Accuracy Mk: 0.0497
- Accuracy Tk: nan
- Accuracy Asd: 0.2779
- Accuracy Vsd: 0.3995
- Accuracy Ak: 0.3959
- Iou Unlabeled: 0.0
- Iou Lv: 0.6626
- Iou Rv: 0.3764
- Iou Ra: 0.5699
- Iou La: 0.6056
- Iou Vs: 0.1108
- Iou As: nan
- Iou Mk: 0.0485
- Iou Tk: nan
- Iou Asd: 0.2565
- Iou Vsd: 0.3465
- Iou Ak: 0.3718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Lv | Accuracy Rv | Accuracy Ra | Accuracy La | Accuracy Vs | Accuracy As | Accuracy Mk | Accuracy Tk | Accuracy Asd | Accuracy Vsd | Accuracy Ak | Iou Unlabeled | Iou Lv | Iou Rv | Iou Ra | Iou La | Iou Vs | Iou As | Iou Mk | Iou Tk | Iou Asd | Iou Vsd | Iou Ak |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:------------:|:------------:|:-----------:|:-------------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:-------:|:-------:|:------:|
| 2.0322 | 0.12 | 20 | 2.2124 | 0.0954 | 0.1885 | 0.3033 | nan | 0.3903 | 0.4680 | 0.0850 | 0.0173 | 0.0 | nan | 0.0011 | nan | 0.0 | 0.1087 | 0.6263 | 0.0 | 0.2970 | 0.2085 | 0.0782 | 0.0172 | 0.0 | nan | 0.0011 | 0.0 | 0.0 | 0.0823 | 0.3647 |
| 1.6027 | 0.25 | 40 | 1.5649 | 0.0789 | 0.1168 | 0.2640 | nan | 0.5149 | 0.0061 | 0.0264 | 0.0839 | 0.0 | nan | 0.0 | nan | 0.0001 | 0.0014 | 0.4180 | 0.0 | 0.3418 | 0.0061 | 0.0262 | 0.0787 | 0.0 | nan | 0.0 | nan | 0.0001 | 0.0014 | 0.3342 |
| 1.2877 | 0.38 | 60 | 1.2616 | 0.0943 | 0.1296 | 0.2685 | nan | 0.4665 | 0.0053 | 0.0547 | 0.2421 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0050 | 0.3930 | 0.0 | 0.3612 | 0.0053 | 0.0529 | 0.1877 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0050 | 0.3312 |
| 1.0981 | 0.5 | 80 | 1.2208 | 0.0967 | 0.1552 | 0.3898 | nan | 0.8151 | 0.0079 | 0.0082 | 0.0794 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4863 | 0.0 | 0.4737 | 0.0079 | 0.0082 | 0.0750 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4020 |
| 1.0235 | 0.62 | 100 | 0.9343 | 0.1218 | 0.1888 | 0.4419 | nan | 0.8508 | 0.0102 | 0.0423 | 0.3015 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4947 | 0.0 | 0.5319 | 0.0101 | 0.0418 | 0.2283 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4059 |
| 0.8977 | 0.75 | 120 | 0.7806 | 0.1592 | 0.2227 | 0.4764 | nan | 0.8124 | 0.1787 | 0.1188 | 0.4178 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4763 | 0.0 | 0.6151 | 0.1741 | 0.1124 | 0.2995 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3906 |
| 0.6932 | 0.88 | 140 | 0.6246 | 0.1262 | 0.1590 | 0.3766 | nan | 0.6794 | 0.2019 | 0.1415 | 0.2810 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.1276 | 0.0 | 0.5674 | 0.1941 | 0.1372 | 0.2414 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.1217 |
| 0.6168 | 1.0 | 160 | 0.6124 | 0.1752 | 0.2277 | 0.4717 | nan | 0.7500 | 0.3261 | 0.1491 | 0.4375 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3864 | 0.0 | 0.6106 | 0.2973 | 0.1475 | 0.3522 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3439 |
| 0.5758 | 1.12 | 180 | 0.5658 | 0.2037 | 0.2520 | 0.4750 | nan | 0.6646 | 0.3955 | 0.3596 | 0.4433 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4048 | 0.0 | 0.6048 | 0.3705 | 0.3133 | 0.3865 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3623 |
| 0.5081 | 1.25 | 200 | 0.5116 | 0.2316 | 0.2993 | 0.5280 | nan | 0.6460 | 0.4867 | 0.4741 | 0.6477 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.4396 | 0.0 | 0.6098 | 0.4523 | 0.3961 | 0.4611 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3965 |
| 0.6351 | 1.38 | 220 | 0.4879 | 0.1127 | 0.1324 | 0.2609 | nan | 0.3749 | 0.0902 | 0.2601 | 0.3883 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0783 | 0.0 | 0.3623 | 0.0897 | 0.2510 | 0.3466 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.0774 |
| 0.6241 | 1.5 | 240 | 0.4593 | 0.2439 | 0.3090 | 0.5686 | nan | 0.7439 | 0.4492 | 0.5367 | 0.6916 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3597 | 0.0 | 0.6995 | 0.4322 | 0.4400 | 0.5265 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3410 |
| 0.4315 | 1.62 | 260 | 0.4082 | 0.2175 | 0.2611 | 0.4948 | nan | 0.6811 | 0.3535 | 0.4253 | 0.5871 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.3025 | 0.0 | 0.6398 | 0.3459 | 0.3952 | 0.5052 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.2886 |
| 0.5236 | 1.75 | 280 | 0.4651 | 0.1063 | 0.1353 | 0.2191 | nan | 0.2161 | 0.0885 | 0.3687 | 0.4434 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.1015 | 0.0 | 0.2138 | 0.0884 | 0.3282 | 0.3313 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0 | 0.1012 |
| 0.3688 | 1.88 | 300 | 0.4279 | 0.2796 | 0.3459 | 0.6382 | nan | 0.8529 | 0.5705 | 0.5493 | 0.6449 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0051 | 0.4903 | 0.0 | 0.7546 | 0.5277 | 0.5044 | 0.5537 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0051 | 0.4500 |
| 0.3659 | 2.0 | 320 | 0.3907 | 0.1881 | 0.2192 | 0.4461 | nan | 0.7156 | 0.1476 | 0.3144 | 0.4135 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0006 | 0.3810 | 0.0 | 0.6851 | 0.1461 | 0.3012 | 0.3919 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0006 | 0.3560 |
| 0.3243 | 2.12 | 340 | 0.3846 | 0.2737 | 0.3272 | 0.5846 | nan | 0.7313 | 0.4747 | 0.6435 | 0.7038 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0010 | 0.3904 | 0.0 | 0.7045 | 0.4610 | 0.5733 | 0.6223 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0010 | 0.3752 |
| 0.4169 | 2.25 | 360 | 0.4099 | 0.1292 | 0.1475 | 0.2563 | nan | 0.3286 | 0.0968 | 0.3184 | 0.3088 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0005 | 0.2741 | 0.0 | 0.3241 | 0.0965 | 0.3061 | 0.2960 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0005 | 0.2685 |
| 0.2951 | 2.38 | 380 | 0.3583 | 0.2277 | 0.2701 | 0.4962 | nan | 0.6695 | 0.2136 | 0.5730 | 0.6784 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0442 | 0.2519 | 0.0 | 0.6409 | 0.2125 | 0.5347 | 0.5967 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0435 | 0.2488 |
| 0.3847 | 2.5 | 400 | 0.3565 | 0.2410 | 0.2843 | 0.5032 | nan | 0.6544 | 0.3067 | 0.5888 | 0.6409 | 0.0 | nan | 0.0 | nan | 0.0 | 0.1089 | 0.2594 | 0.0 | 0.6304 | 0.3023 | 0.5347 | 0.5853 | 0.0 | nan | 0.0 | nan | 0.0 | 0.1033 | 0.2535 |
| 0.339 | 2.62 | 420 | 0.3715 | 0.3085 | 0.3697 | 0.6227 | nan | 0.7530 | 0.5620 | 0.6411 | 0.6900 | 0.0 | nan | 0.0 | nan | 0.0015 | 0.1337 | 0.5460 | 0.0 | 0.7083 | 0.5347 | 0.5722 | 0.6160 | 0.0 | nan | 0.0 | nan | 0.0015 | 0.1261 | 0.5260 |
| 0.7318 | 2.75 | 440 | 0.3574 | 0.2478 | 0.2950 | 0.4525 | nan | 0.5247 | 0.2338 | 0.5171 | 0.6926 | 0.0 | nan | 0.0 | nan | 0.0097 | 0.3424 | 0.3350 | 0.0 | 0.5100 | 0.2322 | 0.4803 | 0.6174 | 0.0 | nan | 0.0 | nan | 0.0097 | 0.3048 | 0.3235 |
| 0.2905 | 2.88 | 460 | 0.3609 | 0.1903 | 0.2262 | 0.3935 | nan | 0.4734 | 0.1841 | 0.5925 | 0.5863 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0201 | 0.1799 | 0.0 | 0.4671 | 0.1834 | 0.5348 | 0.5192 | 0.0 | nan | 0.0 | nan | 0.0 | 0.0199 | 0.1786 |
| 0.3793 | 3.0 | 480 | 0.3452 | 0.2734 | 0.3213 | 0.5973 | nan | 0.8327 | 0.4635 | 0.5374 | 0.6168 | 0.0 | nan | 0.0 | nan | 0.0263 | 0.0746 | 0.3404 | 0.0 | 0.7723 | 0.4512 | 0.5139 | 0.5700 | 0.0 | nan | 0.0 | nan | 0.0260 | 0.0734 | 0.3270 |
| 0.3922 | 3.12 | 500 | 0.3695 | 0.2151 | 0.2604 | 0.3659 | nan | 0.2751 | 0.2847 | 0.6318 | 0.7206 | 0.0 | nan | 0.0 | nan | 0.0045 | 0.1409 | 0.2863 | 0.0 | 0.2726 | 0.2824 | 0.5652 | 0.6103 | 0.0 | nan | 0.0 | nan | 0.0045 | 0.1339 | 0.2824 |
| 0.3299 | 3.25 | 520 | 0.3326 | 0.3063 | 0.3610 | 0.6202 | nan | 0.8029 | 0.5001 | 0.5866 | 0.6558 | 0.0 | nan | 0.0 | nan | 0.0422 | 0.1575 | 0.5038 | 0.0 | 0.7639 | 0.4932 | 0.5461 | 0.5948 | 0.0 | nan | 0.0 | nan | 0.0416 | 0.1472 | 0.4762 |
| 0.2547 | 3.38 | 540 | 0.3323 | 0.2650 | 0.3121 | 0.5052 | nan | 0.6042 | 0.4311 | 0.6044 | 0.5282 | 0.0 | nan | 0.0 | nan | 0.0813 | 0.1438 | 0.4164 | 0.0 | 0.5882 | 0.4189 | 0.5254 | 0.5006 | 0.0 | nan | 0.0 | nan | 0.0801 | 0.1359 | 0.4010 |
| 0.2154 | 3.5 | 560 | 0.3211 | 0.2903 | 0.3397 | 0.5796 | nan | 0.7327 | 0.4341 | 0.6265 | 0.6269 | 0.0 | nan | 0.0 | nan | 0.0340 | 0.1079 | 0.4955 | 0.0 | 0.7034 | 0.4304 | 0.5828 | 0.5874 | 0.0 | nan | 0.0 | nan | 0.0337 | 0.1022 | 0.4634 |
| 0.3146 | 3.62 | 580 | 0.3642 | 0.3096 | 0.3854 | 0.5967 | nan | 0.6732 | 0.4518 | 0.7254 | 0.8100 | 0.0 | nan | 0.0 | nan | 0.1293 | 0.2673 | 0.4116 | 0.0 | 0.6557 | 0.4444 | 0.5843 | 0.6517 | 0.0 | nan | 0.0 | nan | 0.1212 | 0.2434 | 0.3957 |
| 0.2216 | 3.75 | 600 | 0.3178 | 0.3241 | 0.3818 | 0.5998 | nan | 0.7614 | 0.4294 | 0.5415 | 0.7168 | 0.0 | nan | 0.0 | nan | 0.1378 | 0.4248 | 0.4242 | 0.0 | 0.7254 | 0.4212 | 0.5274 | 0.6520 | 0.0 | nan | 0.0 | nan | 0.1338 | 0.3687 | 0.4125 |
| 0.2973 | 3.88 | 620 | 0.3199 | 0.3486 | 0.4127 | 0.6217 | nan | 0.7369 | 0.5178 | 0.5558 | 0.7739 | 0.0 | nan | 0.0 | nan | 0.1965 | 0.4456 | 0.4876 | 0.0 | 0.7072 | 0.4974 | 0.5407 | 0.7010 | 0.0 | nan | 0.0 | nan | 0.1859 | 0.3845 | 0.4692 |
| 0.2434 | 4.0 | 640 | 0.3179 | 0.3415 | 0.4057 | 0.6154 | nan | 0.7161 | 0.4582 | 0.6827 | 0.7445 | 0.0 | nan | 0.0 | nan | 0.1295 | 0.3827 | 0.5376 | 0.0 | 0.6869 | 0.4483 | 0.6280 | 0.6769 | 0.0 | nan | 0.0 | nan | 0.1254 | 0.3360 | 0.5134 |
| 0.2283 | 4.12 | 660 | 0.3310 | 0.2584 | 0.3073 | 0.5076 | nan | 0.6237 | 0.2267 | 0.6673 | 0.7014 | 0.0 | nan | 0.0 | nan | 0.0718 | 0.1288 | 0.3464 | 0.0 | 0.6078 | 0.2260 | 0.5912 | 0.6270 | 0.0 | nan | 0.0 | nan | 0.0707 | 0.1212 | 0.3401 |
| 0.6263 | 4.25 | 680 | 0.3153 | 0.2947 | 0.3436 | 0.5198 | nan | 0.6461 | 0.2824 | 0.5694 | 0.6236 | 0.0 | nan | 0.0 | nan | 0.1512 | 0.3950 | 0.4248 | 0.0 | 0.6244 | 0.2811 | 0.5498 | 0.5852 | 0.0 | nan | 0.0 | nan | 0.1449 | 0.3479 | 0.4140 |
| 0.1708 | 4.38 | 700 | 0.3173 | 0.2957 | 0.3435 | 0.5834 | nan | 0.7531 | 0.3902 | 0.5853 | 0.7153 | 0.0 | nan | 0.0 | nan | 0.1331 | 0.1239 | 0.3908 | 0.0 | 0.7188 | 0.3869 | 0.5656 | 0.6612 | 0.0 | nan | 0.0 | nan | 0.1298 | 0.1159 | 0.3788 |
| 0.246 | 4.5 | 720 | 0.3138 | 0.2570 | 0.2955 | 0.5052 | nan | 0.6686 | 0.3447 | 0.4552 | 0.5989 | 0.0 | nan | 0.0 | nan | 0.1237 | 0.1230 | 0.3459 | 0.0 | 0.6469 | 0.3416 | 0.4442 | 0.5640 | 0.0 | nan | 0.0 | nan | 0.1213 | 0.1159 | 0.3366 |
| 0.3876 | 4.62 | 740 | 0.3084 | 0.3646 | 0.4336 | 0.6321 | nan | 0.7367 | 0.4776 | 0.6536 | 0.7478 | 0.0 | nan | 0.0 | nan | 0.2351 | 0.4973 | 0.5539 | 0.0 | 0.7108 | 0.4700 | 0.6166 | 0.6824 | 0.0 | nan | 0.0 | nan | 0.2208 | 0.4179 | 0.5274 |
| 0.4766 | 4.75 | 760 | 0.3251 | 0.2509 | 0.2894 | 0.4716 | nan | 0.6095 | 0.3498 | 0.4348 | 0.4989 | 0.0 | nan | 0.0 | nan | 0.1119 | 0.2020 | 0.3972 | 0.0 | 0.5954 | 0.3434 | 0.4157 | 0.4754 | 0.0 | nan | 0.0 | nan | 0.1092 | 0.1836 | 0.3865 |
| 0.4431 | 4.88 | 780 | 0.3052 | 0.3104 | 0.3654 | 0.5781 | nan | 0.7024 | 0.4862 | 0.5150 | 0.7363 | 0.0 | nan | 0.0 | nan | 0.2163 | 0.2456 | 0.3866 | 0.0 | 0.6801 | 0.4736 | 0.5039 | 0.6461 | 0.0 | nan | 0.0 | nan | 0.2021 | 0.2209 | 0.3771 |
| 0.2319 | 5.0 | 800 | 0.3104 | 0.3316 | 0.3938 | 0.5875 | nan | 0.6790 | 0.5433 | 0.6711 | 0.6572 | 0.0 | nan | 0.0 | nan | 0.2908 | 0.3349 | 0.3675 | 0.0 | 0.6628 | 0.5265 | 0.5922 | 0.6113 | 0.0 | nan | 0.0 | nan | 0.2656 | 0.2987 | 0.3585 |
| 0.3361 | 5.12 | 820 | 0.3125 | 0.3219 | 0.3808 | 0.5905 | nan | 0.7234 | 0.3480 | 0.6026 | 0.7698 | 0.0 | nan | 0.0 | nan | 0.1558 | 0.3461 | 0.4818 | 0.0 | 0.7028 | 0.3453 | 0.5677 | 0.6877 | 0.0 | nan | 0.0 | nan | 0.1498 | 0.3077 | 0.4584 |
| 0.412 | 5.25 | 840 | 0.3477 | 0.2427 | 0.2810 | 0.4099 | nan | 0.3970 | 0.2768 | 0.5965 | 0.6918 | 0.0 | nan | 0.0 | nan | 0.1238 | 0.1614 | 0.2814 | 0.0 | 0.3899 | 0.2752 | 0.5703 | 0.6446 | 0.0 | nan | 0.0 | nan | 0.1208 | 0.1494 | 0.2767 |
| 0.1799 | 5.38 | 860 | 0.3132 | 0.3444 | 0.4035 | 0.6161 | nan | 0.7651 | 0.4804 | 0.6609 | 0.5953 | 0.0 | nan | 0.0 | nan | 0.2167 | 0.3901 | 0.5227 | 0.0 | 0.7389 | 0.4759 | 0.6240 | 0.5643 | 0.0 | nan | 0.0 | nan | 0.2033 | 0.3414 | 0.4965 |
| 0.1716 | 5.5 | 880 | 0.3186 | 0.2805 | 0.3289 | 0.4955 | nan | 0.5685 | 0.3185 | 0.5785 | 0.6888 | 0.0 | nan | 0.0 | nan | 0.1934 | 0.2548 | 0.3577 | 0.0 | 0.5592 | 0.3165 | 0.5487 | 0.6271 | 0.0 | nan | 0.0 | nan | 0.1798 | 0.2295 | 0.3441 |
| 0.4388 | 5.62 | 900 | 0.3171 | 0.4098 | 0.4914 | 0.7348 | nan | 0.8788 | 0.6109 | 0.7475 | 0.7979 | 0.0 | nan | 0.0 | nan | 0.2607 | 0.5234 | 0.6032 | 0.0 | 0.8320 | 0.5956 | 0.6842 | 0.7286 | 0.0 | nan | 0.0 | nan | 0.2445 | 0.4481 | 0.5648 |
| 0.2632 | 5.75 | 920 | 0.3163 | 0.2697 | 0.3130 | 0.4551 | nan | 0.5145 | 0.3445 | 0.4613 | 0.6042 | 0.0 | nan | 0.0 | nan | 0.1180 | 0.3896 | 0.3846 | 0.0 | 0.5045 | 0.3404 | 0.4568 | 0.5718 | 0.0 | nan | 0.0 | nan | 0.1141 | 0.3416 | 0.3677 |
| 0.3024 | 5.88 | 940 | 0.3063 | 0.3883 | 0.4645 | 0.6758 | nan | 0.7778 | 0.6624 | 0.7137 | 0.7013 | 0.0 | nan | 0.0 | nan | 0.3253 | 0.4950 | 0.5054 | 0.0 | 0.7479 | 0.6323 | 0.6434 | 0.6506 | 0.0 | nan | 0.0 | nan | 0.2919 | 0.4251 | 0.4919 |
| 0.2551 | 6.0 | 960 | 0.3489 | 0.2261 | 0.2625 | 0.4024 | nan | 0.5011 | 0.1084 | 0.4074 | 0.6709 | 0.0 | nan | 0.0 | nan | 0.1666 | 0.2592 | 0.2489 | 0.0 | 0.4903 | 0.1081 | 0.4042 | 0.6181 | 0.0 | nan | 0.0 | nan | 0.1562 | 0.2369 | 0.2470 |
| 0.3281 | 6.12 | 980 | 0.2939 | 0.3635 | 0.4275 | 0.6116 | nan | 0.6803 | 0.5665 | 0.6418 | 0.6806 | 0.0 | nan | 0.0 | nan | 0.2545 | 0.4660 | 0.5579 | 0.0 | 0.6659 | 0.5508 | 0.6049 | 0.6359 | 0.0 | nan | 0.0 | nan | 0.2372 | 0.4071 | 0.5330 |
| 0.1372 | 6.25 | 1000 | 0.2998 | 0.3755 | 0.4413 | 0.6450 | nan | 0.7530 | 0.5417 | 0.6673 | 0.7023 | 0.0 | nan | 0.0 | nan | 0.2979 | 0.4450 | 0.5648 | 0.0 | 0.7287 | 0.5300 | 0.6359 | 0.6582 | 0.0 | nan | 0.0 | nan | 0.2737 | 0.3899 | 0.5389 |
| 0.3485 | 6.38 | 1020 | 0.3398 | 0.2557 | 0.2941 | 0.4515 | nan | 0.5305 | 0.2957 | 0.5158 | 0.6263 | 0.0 | nan | 0.0 | nan | 0.1937 | 0.2043 | 0.2811 | 0.0 | 0.5206 | 0.2931 | 0.5070 | 0.5912 | 0.0 | nan | 0.0 | nan | 0.1833 | 0.1846 | 0.2769 |
| 0.3755 | 6.5 | 1040 | 0.3034 | 0.3526 | 0.4160 | 0.5795 | nan | 0.6346 | 0.4564 | 0.7050 | 0.6986 | 0.0 | nan | 0.0 | nan | 0.3010 | 0.4881 | 0.4598 | 0.0 | 0.6188 | 0.4509 | 0.6639 | 0.6554 | 0.0 | nan | 0.0 | nan | 0.2758 | 0.4166 | 0.4442 |
| 0.2617 | 6.62 | 1060 | 0.3166 | 0.2905 | 0.3384 | 0.4820 | nan | 0.5401 | 0.3225 | 0.5981 | 0.6153 | 0.0 | nan | 0.0 | nan | 0.1880 | 0.4060 | 0.3756 | 0.0 | 0.5337 | 0.3212 | 0.5770 | 0.5817 | 0.0 | nan | 0.0 | nan | 0.1774 | 0.3511 | 0.3627 |
| 0.2937 | 6.75 | 1080 | 0.3090 | 0.3864 | 0.4585 | 0.7031 | nan | 0.8093 | 0.6687 | 0.7189 | 0.7808 | 0.0 | nan | 0.0 | nan | 0.3271 | 0.2123 | 0.6092 | 0.0 | 0.7810 | 0.6460 | 0.6639 | 0.7094 | 0.0 | nan | 0.0 | nan | 0.2942 | 0.1957 | 0.5738 |
| 0.3588 | 6.88 | 1100 | 0.3011 | 0.3653 | 0.4310 | 0.6482 | nan | 0.8132 | 0.4360 | 0.6549 | 0.7523 | 0.0 | nan | 0.0 | nan | 0.3123 | 0.4840 | 0.4267 | 0.0 | 0.7784 | 0.4310 | 0.6235 | 0.6944 | 0.0 | nan | 0.0 | nan | 0.2884 | 0.4223 | 0.4149 |
| 0.1613 | 7.0 | 1120 | 0.3302 | 0.2838 | 0.3344 | 0.4622 | nan | 0.4687 | 0.3133 | 0.6402 | 0.6782 | 0.0 | nan | 0.0 | nan | 0.1601 | 0.4001 | 0.3490 | 0.0 | 0.4620 | 0.3100 | 0.5933 | 0.6247 | 0.0 | nan | 0.0 | nan | 0.1525 | 0.3558 | 0.3400 |
| 0.4217 | 7.12 | 1140 | 0.3087 | 0.3723 | 0.4451 | 0.6405 | nan | 0.7463 | 0.4361 | 0.7284 | 0.7778 | 0.0 | nan | 0.0 | nan | 0.3229 | 0.4981 | 0.4960 | 0.0 | 0.7258 | 0.4307 | 0.6602 | 0.7060 | 0.0 | nan | 0.0 | nan | 0.2946 | 0.4255 | 0.4800 |
| 0.1429 | 7.25 | 1160 | 0.3227 | 0.2794 | 0.3221 | 0.5335 | nan | 0.7381 | 0.3377 | 0.5059 | 0.5342 | 0.0 | nan | 0.0 | nan | 0.2719 | 0.1916 | 0.3198 | 0.0 | 0.7147 | 0.3361 | 0.4943 | 0.5100 | 0.0 | nan | 0.0 | nan | 0.2479 | 0.1761 | 0.3151 |
| 0.227 | 7.38 | 1180 | 0.3087 | 0.3749 | 0.4471 | 0.6221 | nan | 0.6645 | 0.5048 | 0.7103 | 0.7872 | 0.0 | nan | 0.0 | nan | 0.3502 | 0.4474 | 0.5594 | 0.0 | 0.6499 | 0.4966 | 0.6631 | 0.7065 | 0.0 | nan | 0.0 | nan | 0.3168 | 0.3915 | 0.5250 |
| 0.3733 | 7.5 | 1200 | 0.3304 | 0.2777 | 0.3229 | 0.4832 | nan | 0.5603 | 0.3886 | 0.5612 | 0.5532 | 0.0 | nan | 0.0 | nan | 0.1744 | 0.2915 | 0.3773 | 0.0 | 0.5501 | 0.3824 | 0.5434 | 0.5248 | 0.0 | nan | 0.0 | nan | 0.1655 | 0.2561 | 0.3542 |
| 0.3148 | 7.62 | 1220 | 0.3075 | 0.3787 | 0.4500 | 0.6531 | nan | 0.7425 | 0.5909 | 0.7189 | 0.7270 | 0.0 | nan | 0.0 | nan | 0.3257 | 0.4420 | 0.5030 | 0.0 | 0.7152 | 0.5687 | 0.6673 | 0.6700 | 0.0 | nan | 0.0 | nan | 0.2976 | 0.3823 | 0.4863 |
| 0.22 | 7.75 | 1240 | 0.3156 | 0.3340 | 0.3934 | 0.5589 | nan | 0.6127 | 0.4262 | 0.6387 | 0.7554 | 0.0 | nan | 0.0 | nan | 0.2170 | 0.4800 | 0.4108 | 0.0 | 0.6002 | 0.4203 | 0.6121 | 0.6848 | 0.0 | nan | 0.0 | nan | 0.2073 | 0.4168 | 0.3984 |
| 0.499 | 7.88 | 1260 | 0.3085 | 0.3454 | 0.4092 | 0.6278 | nan | 0.7534 | 0.4363 | 0.7109 | 0.7256 | 0.0 | nan | 0.0 | nan | 0.2525 | 0.2773 | 0.5267 | 0.0 | 0.7296 | 0.4311 | 0.6556 | 0.6668 | 0.0 | nan | 0.0 | nan | 0.2355 | 0.2512 | 0.4848 |
| 0.2604 | 8.0 | 1280 | 0.3123 | 0.3504 | 0.4118 | 0.6089 | nan | 0.7175 | 0.4779 | 0.6668 | 0.6668 | 0.0 | nan | 0.0 | nan | 0.2368 | 0.4184 | 0.5218 | 0.0 | 0.6949 | 0.4708 | 0.6402 | 0.6225 | 0.0 | nan | 0.0 | nan | 0.2192 | 0.3649 | 0.4915 |
| 0.146 | 8.12 | 1300 | 0.3274 | 0.3036 | 0.3526 | 0.5368 | nan | 0.6413 | 0.4378 | 0.6158 | 0.5674 | 0.0 | nan | 0.0 | nan | 0.1757 | 0.3165 | 0.4191 | 0.0 | 0.6216 | 0.4333 | 0.5946 | 0.5339 | 0.0 | nan | 0.0 | nan | 0.1697 | 0.2810 | 0.4015 |
| 0.1103 | 8.25 | 1320 | 0.3339 | 0.2738 | 0.3155 | 0.4762 | nan | 0.5556 | 0.4026 | 0.5015 | 0.5408 | 0.0 | nan | 0.0 | nan | 0.1370 | 0.2938 | 0.4084 | 0.0 | 0.5454 | 0.3969 | 0.4921 | 0.5111 | 0.0 | nan | 0.0 | nan | 0.1330 | 0.2658 | 0.3939 |
| 0.1323 | 8.38 | 1340 | 0.3179 | 0.3304 | 0.3865 | 0.5986 | nan | 0.7334 | 0.4769 | 0.6287 | 0.6839 | 0.0 | nan | 0.0 | nan | 0.2152 | 0.3473 | 0.3933 | 0.0 | 0.7022 | 0.4679 | 0.6091 | 0.6261 | 0.0 | nan | 0.0 | nan | 0.2049 | 0.3080 | 0.3863 |
| 0.1057 | 8.5 | 1360 | 0.4118 | 0.1949 | 0.2242 | 0.3076 | nan | 0.3007 | 0.1559 | 0.4492 | 0.5135 | 0.0 | nan | 0.0 | nan | 0.1282 | 0.2228 | 0.2472 | 0.0 | 0.2972 | 0.1552 | 0.4441 | 0.4845 | 0.0 | nan | 0.0 | nan | 0.1241 | 0.2028 | 0.2412 |
| 0.1248 | 8.62 | 1380 | 0.3228 | 0.4155 | 0.4991 | 0.7269 | nan | 0.8415 | 0.6462 | 0.7622 | 0.7778 | 0.0 | nan | 0.0 | nan | 0.3683 | 0.4851 | 0.6106 | 0.0 | 0.8100 | 0.6271 | 0.6994 | 0.7057 | 0.0 | nan | 0.0 | nan | 0.3280 | 0.4172 | 0.5674 |
| 0.1165 | 8.75 | 1400 | 0.3307 | 0.2995 | 0.3485 | 0.5194 | nan | 0.6149 | 0.3102 | 0.5592 | 0.6835 | 0.0 | nan | 0.0 | nan | 0.1857 | 0.3513 | 0.4318 | 0.0 | 0.5982 | 0.3074 | 0.5485 | 0.6349 | 0.0 | nan | 0.0 | nan | 0.1765 | 0.3142 | 0.4156 |
| 0.2999 | 8.88 | 1420 | 0.3766 | 0.2329 | 0.2673 | 0.3927 | nan | 0.4223 | 0.2954 | 0.4437 | 0.5620 | 0.0 | nan | 0.0 | nan | 0.1641 | 0.1857 | 0.3327 | 0.0 | 0.4163 | 0.2897 | 0.4411 | 0.5318 | 0.0 | nan | 0.0 | nan | 0.1570 | 0.1704 | 0.3232 |
| 0.2005 | 9.0 | 1440 | 0.3224 | 0.3457 | 0.4100 | 0.5800 | nan | 0.6510 | 0.4458 | 0.6701 | 0.6765 | 0.0 | nan | 0.0002 | nan | 0.2510 | 0.4869 | 0.5085 | 0.0 | 0.6327 | 0.4381 | 0.6441 | 0.6348 | 0.0 | nan | 0.0002 | nan | 0.2321 | 0.4092 | 0.4656 |
| 0.0952 | 9.12 | 1460 | 0.3368 | 0.2986 | 0.3475 | 0.5230 | nan | 0.5872 | 0.4686 | 0.6328 | 0.6070 | 0.0 | nan | 0.0 | nan | 0.2227 | 0.2252 | 0.3843 | 0.0 | 0.5748 | 0.4534 | 0.5986 | 0.5720 | 0.0 | nan | 0.0 | nan | 0.2114 | 0.2061 | 0.3692 |
| 0.3493 | 9.25 | 1480 | 0.3637 | 0.2527 | 0.2909 | 0.4285 | nan | 0.5072 | 0.3061 | 0.4793 | 0.5032 | 0.0 | nan | 0.0 | nan | 0.1539 | 0.3356 | 0.3325 | 0.0 | 0.4971 | 0.3025 | 0.4757 | 0.4801 | 0.0 | nan | 0.0 | nan | 0.1476 | 0.2993 | 0.3245 |
| 0.6102 | 9.38 | 1500 | 0.3302 | 0.3325 | 0.3885 | 0.5757 | nan | 0.6527 | 0.5490 | 0.6148 | 0.6472 | 0.0 | nan | 0.0 | nan | 0.2155 | 0.3649 | 0.4522 | 0.0 | 0.6350 | 0.5320 | 0.5993 | 0.6117 | 0.0 | nan | 0.0 | nan | 0.2015 | 0.3228 | 0.4231 |
| 0.1355 | 9.5 | 1520 | 0.3136 | 0.3397 | 0.3985 | 0.5917 | nan | 0.7078 | 0.4385 | 0.6197 | 0.6754 | 0.0 | nan | 0.0 | nan | 0.2411 | 0.4128 | 0.4908 | 0.0 | 0.6904 | 0.4342 | 0.6004 | 0.6272 | 0.0 | nan | 0.0 | nan | 0.2228 | 0.3615 | 0.4608 |
| 0.2828 | 9.62 | 1540 | 0.3214 | 0.3632 | 0.4329 | 0.6220 | nan | 0.7412 | 0.4203 | 0.6106 | 0.7371 | 0.0 | nan | 0.0004 | nan | 0.3513 | 0.4797 | 0.5558 | 0.0 | 0.7149 | 0.4099 | 0.5925 | 0.6724 | 0.0 | nan | 0.0004 | nan | 0.3130 | 0.4125 | 0.5160 |
| 0.2499 | 9.75 | 1560 | 0.3470 | 0.3178 | 0.3744 | 0.5370 | nan | 0.6135 | 0.4310 | 0.6613 | 0.5944 | 0.0 | nan | 0.0001 | nan | 0.3109 | 0.3698 | 0.3889 | 0.0 | 0.5963 | 0.4247 | 0.6161 | 0.5602 | 0.0 | nan | 0.0001 | nan | 0.2767 | 0.3282 | 0.3756 |
| 0.3973 | 9.88 | 1580 | 0.3292 | 0.3557 | 0.4222 | 0.6036 | nan | 0.6854 | 0.5253 | 0.6598 | 0.6929 | 0.0 | nan | 0.0 | nan | 0.2791 | 0.5040 | 0.4535 | 0.0 | 0.6632 | 0.5042 | 0.6184 | 0.6434 | 0.0 | nan | 0.0 | nan | 0.2550 | 0.4358 | 0.4374 |
| 0.1764 | 10.0 | 1600 | 0.3317 | 0.3493 | 0.4150 | 0.5984 | nan | 0.6897 | 0.4370 | 0.6896 | 0.7262 | 0.0 | nan | 0.0023 | nan | 0.2221 | 0.5208 | 0.4472 | 0.0 | 0.6702 | 0.4293 | 0.6340 | 0.6649 | 0.0 | nan | 0.0023 | nan | 0.2127 | 0.4505 | 0.4294 |
| 0.3667 | 10.12 | 1620 | 0.3224 | 0.3385 | 0.3978 | 0.5951 | nan | 0.7262 | 0.4129 | 0.5865 | 0.6864 | 0.0 | nan | 0.0041 | nan | 0.2466 | 0.4033 | 0.5143 | 0.0 | 0.7033 | 0.4053 | 0.5673 | 0.6383 | 0.0 | nan | 0.0041 | nan | 0.2340 | 0.3562 | 0.4768 |
| 0.2782 | 10.25 | 1640 | 0.3243 | 0.3675 | 0.4383 | 0.6355 | nan | 0.7304 | 0.5449 | 0.7001 | 0.7129 | 0.0 | nan | 0.0021 | nan | 0.2615 | 0.5006 | 0.4920 | 0.0 | 0.7054 | 0.5218 | 0.6534 | 0.6531 | 0.0 | nan | 0.0021 | nan | 0.2478 | 0.4247 | 0.4667 |
| 0.1716 | 10.38 | 1660 | 0.3199 | 0.3531 | 0.4131 | 0.6228 | nan | 0.7347 | 0.5687 | 0.6019 | 0.7168 | 0.0 | nan | 0.0001 | nan | 0.2756 | 0.3677 | 0.4526 | 0.0 | 0.7107 | 0.5562 | 0.5817 | 0.6609 | 0.0 | nan | 0.0001 | nan | 0.2583 | 0.3276 | 0.4354 |
| 0.1938 | 10.5 | 1680 | 0.3304 | 0.3369 | 0.4038 | 0.5768 | nan | 0.6403 | 0.3871 | 0.6906 | 0.7228 | 0.0 | nan | 0.0049 | nan | 0.2373 | 0.4005 | 0.5506 | 0.0 | 0.6248 | 0.3827 | 0.6360 | 0.6713 | 0.0 | nan | 0.0049 | nan | 0.2232 | 0.3537 | 0.4722 |
| 0.0939 | 10.62 | 1700 | 0.3178 | 0.3848 | 0.4610 | 0.6472 | nan | 0.7358 | 0.5401 | 0.7080 | 0.7452 | 0.0 | nan | 0.0062 | nan | 0.3044 | 0.6229 | 0.4866 | 0.0 | 0.7137 | 0.5308 | 0.6585 | 0.6909 | 0.0 | nan | 0.0062 | nan | 0.2798 | 0.5091 | 0.4593 |
| 0.1592 | 10.75 | 1720 | 0.3323 | 0.3312 | 0.3861 | 0.5834 | nan | 0.6894 | 0.6042 | 0.5845 | 0.5805 | 0.0 | nan | 0.0055 | nan | 0.2218 | 0.3658 | 0.4236 | 0.0 | 0.6704 | 0.5786 | 0.5710 | 0.5451 | 0.0 | nan | 0.0055 | nan | 0.2094 | 0.3236 | 0.4085 |
| 0.186 | 10.88 | 1740 | 0.3280 | 0.3838 | 0.4597 | 0.6407 | nan | 0.7285 | 0.4968 | 0.6695 | 0.7578 | 0.0 | nan | 0.0145 | nan | 0.4074 | 0.5172 | 0.5460 | 0.0 | 0.7056 | 0.4866 | 0.6377 | 0.6958 | 0.0 | nan | 0.0144 | nan | 0.3540 | 0.4358 | 0.5086 |
| 0.124 | 11.0 | 1760 | 0.4089 | 0.2396 | 0.2751 | 0.4128 | nan | 0.5165 | 0.2682 | 0.4594 | 0.4520 | 0.0 | nan | 0.0095 | nan | 0.2177 | 0.2646 | 0.2879 | 0.0 | 0.5080 | 0.2658 | 0.4487 | 0.4315 | 0.0 | nan | 0.0095 | nan | 0.2031 | 0.2473 | 0.2822 |
| 0.1084 | 11.12 | 1780 | 0.3512 | 0.3283 | 0.3864 | 0.5415 | nan | 0.5910 | 0.4464 | 0.6008 | 0.6724 | 0.0 | nan | 0.0105 | nan | 0.2345 | 0.4593 | 0.4626 | 0.0 | 0.5774 | 0.4313 | 0.5885 | 0.6218 | 0.0 | nan | 0.0105 | nan | 0.2220 | 0.3926 | 0.4390 |
| 0.3364 | 11.25 | 1800 | 0.3514 | 0.3245 | 0.3806 | 0.5425 | nan | 0.6046 | 0.4578 | 0.6162 | 0.6777 | 0.0 | nan | 0.0107 | nan | 0.2340 | 0.4572 | 0.3669 | 0.0 | 0.5876 | 0.4430 | 0.6018 | 0.6301 | 0.0 | nan | 0.0107 | nan | 0.2216 | 0.3951 | 0.3547 |
| 0.186 | 11.38 | 1820 | 0.3398 | 0.3337 | 0.3937 | 0.5743 | nan | 0.6718 | 0.3964 | 0.6547 | 0.6960 | 0.0 | nan | 0.0095 | nan | 0.2726 | 0.3967 | 0.4452 | 0.0 | 0.6555 | 0.3905 | 0.6240 | 0.6329 | 0.0 | nan | 0.0095 | nan | 0.2543 | 0.3500 | 0.4209 |
| 0.085 | 11.5 | 1840 | 0.3395 | 0.3541 | 0.4172 | 0.5969 | nan | 0.6834 | 0.4750 | 0.6591 | 0.6675 | 0.0029 | nan | 0.0163 | nan | 0.2791 | 0.4399 | 0.5321 | 0.0 | 0.6637 | 0.4648 | 0.6337 | 0.6201 | 0.0029 | nan | 0.0161 | nan | 0.2594 | 0.3801 | 0.5002 |
| 0.2861 | 11.62 | 1860 | 0.3575 | 0.3064 | 0.3548 | 0.5451 | nan | 0.6531 | 0.4069 | 0.6701 | 0.6340 | 0.0000 | nan | 0.0111 | nan | 0.2071 | 0.2802 | 0.3303 | 0.0 | 0.6370 | 0.4012 | 0.6428 | 0.5936 | 0.0000 | nan | 0.0110 | nan | 0.1952 | 0.2599 | 0.3230 |
| 0.2855 | 11.75 | 1880 | 0.3932 | 0.2415 | 0.2791 | 0.4276 | nan | 0.5475 | 0.2912 | 0.4359 | 0.4478 | 0.0 | nan | 0.0104 | nan | 0.1519 | 0.3073 | 0.3201 | 0.0 | 0.5356 | 0.2855 | 0.4203 | 0.4307 | 0.0 | nan | 0.0104 | nan | 0.1446 | 0.2782 | 0.3099 |
| 0.1548 | 11.88 | 1900 | 0.3398 | 0.3513 | 0.4167 | 0.6004 | nan | 0.7097 | 0.3884 | 0.6305 | 0.7354 | 0.0002 | nan | 0.0138 | nan | 0.2828 | 0.4755 | 0.5141 | 0.0 | 0.6920 | 0.3832 | 0.6074 | 0.6691 | 0.0002 | nan | 0.0138 | nan | 0.2669 | 0.4032 | 0.4775 |
| 0.123 | 12.0 | 1920 | 0.3279 | 0.3495 | 0.4113 | 0.6009 | nan | 0.7406 | 0.4218 | 0.5964 | 0.6630 | 0.0028 | nan | 0.0183 | nan | 0.2751 | 0.5053 | 0.4784 | 0.0 | 0.7190 | 0.4153 | 0.5867 | 0.6196 | 0.0028 | nan | 0.0181 | nan | 0.2542 | 0.4223 | 0.4568 |
| 0.1138 | 12.12 | 1940 | 0.3308 | 0.3983 | 0.4753 | 0.6792 | nan | 0.7682 | 0.5832 | 0.7082 | 0.8045 | 0.0028 | nan | 0.0209 | nan | 0.3537 | 0.4880 | 0.5477 | 0.0 | 0.7424 | 0.5667 | 0.6711 | 0.7307 | 0.0028 | nan | 0.0207 | nan | 0.3242 | 0.4171 | 0.5076 |
| 0.1582 | 12.25 | 1960 | 0.3342 | 0.3720 | 0.4410 | 0.6306 | nan | 0.7554 | 0.4633 | 0.6444 | 0.7011 | 0.0062 | nan | 0.0198 | nan | 0.3419 | 0.5214 | 0.5155 | 0.0 | 0.7309 | 0.4556 | 0.6269 | 0.6504 | 0.0062 | nan | 0.0196 | nan | 0.3087 | 0.4391 | 0.4823 |
| 0.3449 | 12.38 | 1980 | 0.3976 | 0.2429 | 0.2797 | 0.4225 | nan | 0.5187 | 0.2813 | 0.4234 | 0.5055 | 0.0 | nan | 0.0111 | nan | 0.1548 | 0.2818 | 0.3408 | 0.0 | 0.5069 | 0.2772 | 0.4172 | 0.4855 | 0.0 | nan | 0.0111 | nan | 0.1490 | 0.2573 | 0.3246 |
| 0.0296 | 12.5 | 2000 | 0.3332 | 0.3525 | 0.4166 | 0.5979 | nan | 0.6804 | 0.4709 | 0.6924 | 0.6988 | 0.0 | nan | 0.0172 | nan | 0.3403 | 0.3831 | 0.4662 | 0.0 | 0.6615 | 0.4562 | 0.6530 | 0.6470 | 0.0 | nan | 0.0171 | nan | 0.3043 | 0.3389 | 0.4466 |
| 0.2308 | 12.62 | 2020 | 0.3418 | 0.3574 | 0.4208 | 0.6123 | nan | 0.7446 | 0.3992 | 0.6491 | 0.7142 | 0.0 | nan | 0.0130 | nan | 0.3085 | 0.4940 | 0.4650 | 0.0 | 0.7205 | 0.3948 | 0.6289 | 0.6646 | 0.0 | nan | 0.0129 | nan | 0.2849 | 0.4238 | 0.4435 |
| 0.3632 | 12.75 | 2040 | 0.3847 | 0.2694 | 0.3118 | 0.4605 | nan | 0.5391 | 0.3475 | 0.5106 | 0.5736 | 0.0 | nan | 0.0090 | nan | 0.1930 | 0.3284 | 0.3053 | 0.0 | 0.5278 | 0.3411 | 0.5017 | 0.5453 | 0.0 | nan | 0.0089 | nan | 0.1846 | 0.2916 | 0.2930 |
| 0.284 | 12.88 | 2060 | 0.3425 | 0.3836 | 0.4597 | 0.6559 | nan | 0.7536 | 0.5838 | 0.6914 | 0.7272 | 0.0094 | nan | 0.0148 | nan | 0.3443 | 0.5142 | 0.4986 | 0.0 | 0.7278 | 0.5383 | 0.6459 | 0.6707 | 0.0094 | nan | 0.0147 | nan | 0.3187 | 0.4375 | 0.4732 |
| 0.1566 | 13.0 | 2080 | 0.3586 | 0.3093 | 0.3628 | 0.5439 | nan | 0.6643 | 0.4048 | 0.5981 | 0.6177 | 0.0075 | nan | 0.0124 | nan | 0.2639 | 0.3493 | 0.3475 | 0.0 | 0.6486 | 0.3917 | 0.5754 | 0.5727 | 0.0075 | nan | 0.0123 | nan | 0.2477 | 0.3058 | 0.3311 |
| 0.1545 | 13.12 | 2100 | 0.3630 | 0.3109 | 0.3644 | 0.5329 | nan | 0.6530 | 0.3620 | 0.5186 | 0.6261 | 0.0122 | nan | 0.0107 | nan | 0.1922 | 0.4837 | 0.4209 | 0.0 | 0.6339 | 0.3549 | 0.5053 | 0.5850 | 0.0122 | nan | 0.0106 | nan | 0.1847 | 0.4173 | 0.4049 |
| 0.1118 | 13.25 | 2120 | 0.3435 | 0.3579 | 0.4228 | 0.6077 | nan | 0.7231 | 0.4451 | 0.6276 | 0.7012 | 0.0175 | nan | 0.0249 | nan | 0.2535 | 0.5218 | 0.4904 | 0.0 | 0.7009 | 0.4350 | 0.6087 | 0.6502 | 0.0175 | nan | 0.0246 | nan | 0.2404 | 0.4449 | 0.4571 |
| 0.0828 | 13.38 | 2140 | 0.3544 | 0.3361 | 0.3956 | 0.5662 | nan | 0.6733 | 0.3762 | 0.6572 | 0.6600 | 0.0125 | nan | 0.0328 | nan | 0.3596 | 0.3828 | 0.4058 | 0.0 | 0.6520 | 0.3665 | 0.6258 | 0.6161 | 0.0125 | nan | 0.0321 | nan | 0.3236 | 0.3448 | 0.3878 |
| 0.2605 | 13.5 | 2160 | 0.3451 | 0.3732 | 0.4421 | 0.6309 | nan | 0.7398 | 0.4876 | 0.6322 | 0.7386 | 0.0182 | nan | 0.0378 | nan | 0.3453 | 0.4635 | 0.5161 | 0.0 | 0.7155 | 0.4705 | 0.6104 | 0.6734 | 0.0182 | nan | 0.0369 | nan | 0.3171 | 0.4016 | 0.4886 |
| 0.0129 | 13.62 | 2180 | 0.3919 | 0.2765 | 0.3196 | 0.4836 | nan | 0.5955 | 0.3326 | 0.5530 | 0.5408 | 0.0179 | nan | 0.0140 | nan | 0.1846 | 0.2991 | 0.3392 | 0.0 | 0.5800 | 0.3270 | 0.5417 | 0.5094 | 0.0179 | nan | 0.0139 | nan | 0.1754 | 0.2717 | 0.3283 |
| 0.1744 | 13.75 | 2200 | 0.3543 | 0.3287 | 0.3864 | 0.5730 | nan | 0.6835 | 0.3695 | 0.6680 | 0.7169 | 0.0132 | nan | 0.0120 | nan | 0.2276 | 0.4016 | 0.3853 | 0.0 | 0.6621 | 0.3663 | 0.6327 | 0.6620 | 0.0132 | nan | 0.0120 | nan | 0.2145 | 0.3531 | 0.3708 |
| 0.0863 | 13.88 | 2220 | 0.3536 | 0.3503 | 0.4130 | 0.6052 | nan | 0.7206 | 0.4645 | 0.5874 | 0.6916 | 0.0115 | nan | 0.0200 | nan | 0.3492 | 0.3373 | 0.5352 | 0.0 | 0.6962 | 0.4546 | 0.5755 | 0.6385 | 0.0115 | nan | 0.0198 | nan | 0.3158 | 0.2964 | 0.4949 |
| 0.2218 | 14.0 | 2240 | 0.3552 | 0.3527 | 0.4186 | 0.6185 | nan | 0.7443 | 0.4727 | 0.6913 | 0.6287 | 0.0118 | nan | 0.0181 | nan | 0.3010 | 0.3621 | 0.5372 | 0.0 | 0.7196 | 0.4609 | 0.6293 | 0.5846 | 0.0118 | nan | 0.0180 | nan | 0.2818 | 0.3178 | 0.5032 |
| 0.1603 | 14.12 | 2260 | 0.3853 | 0.2835 | 0.3305 | 0.4804 | nan | 0.5713 | 0.3329 | 0.5072 | 0.5800 | 0.0061 | nan | 0.0185 | nan | 0.2341 | 0.3506 | 0.3738 | 0.0 | 0.5570 | 0.3257 | 0.4960 | 0.5457 | 0.0061 | nan | 0.0183 | nan | 0.2195 | 0.3115 | 0.3554 |
| 0.1556 | 14.25 | 2280 | 0.3580 | 0.3469 | 0.4112 | 0.6041 | nan | 0.7349 | 0.4383 | 0.6388 | 0.6923 | 0.0131 | nan | 0.0196 | nan | 0.3261 | 0.4219 | 0.4161 | 0.0 | 0.7132 | 0.4194 | 0.6082 | 0.6401 | 0.0131 | nan | 0.0194 | nan | 0.2961 | 0.3680 | 0.3921 |
| 0.2714 | 14.38 | 2300 | 0.3716 | 0.3215 | 0.3763 | 0.5454 | nan | 0.6469 | 0.3780 | 0.5739 | 0.6474 | 0.0130 | nan | 0.0128 | nan | 0.2433 | 0.4166 | 0.4552 | 0.0 | 0.6285 | 0.3722 | 0.5628 | 0.6024 | 0.0130 | nan | 0.0127 | nan | 0.2288 | 0.3676 | 0.4273 |
| 0.2624 | 14.5 | 2320 | 0.3524 | 0.3357 | 0.3931 | 0.5833 | nan | 0.7198 | 0.4113 | 0.6229 | 0.6339 | 0.0164 | nan | 0.0153 | nan | 0.2559 | 0.4207 | 0.4414 | 0.0 | 0.6967 | 0.4056 | 0.6019 | 0.5973 | 0.0164 | nan | 0.0151 | nan | 0.2376 | 0.3674 | 0.4185 |
| 0.2223 | 14.62 | 2340 | 0.3570 | 0.3148 | 0.3663 | 0.5565 | nan | 0.6820 | 0.4156 | 0.5894 | 0.6286 | 0.0155 | nan | 0.0149 | nan | 0.2473 | 0.3015 | 0.4022 | 0.0 | 0.6633 | 0.4037 | 0.5782 | 0.5907 | 0.0155 | nan | 0.0148 | nan | 0.2266 | 0.2731 | 0.3817 |
| 0.1125 | 14.75 | 2360 | 0.3766 | 0.3027 | 0.3526 | 0.5064 | nan | 0.6058 | 0.3030 | 0.5083 | 0.6290 | 0.0217 | nan | 0.0107 | nan | 0.2239 | 0.4108 | 0.4599 | 0.0 | 0.5916 | 0.2977 | 0.5031 | 0.5910 | 0.0217 | nan | 0.0106 | nan | 0.2101 | 0.3637 | 0.4379 |
| 0.1139 | 14.88 | 2380 | 0.3541 | 0.3752 | 0.4445 | 0.6352 | nan | 0.7230 | 0.5186 | 0.7029 | 0.7593 | 0.0249 | nan | 0.0174 | nan | 0.2855 | 0.4914 | 0.4778 | 0.0 | 0.7011 | 0.4986 | 0.6740 | 0.6961 | 0.0249 | nan | 0.0172 | nan | 0.2652 | 0.4230 | 0.4514 |
| 0.1841 | 15.0 | 2400 | 0.3596 | 0.3337 | 0.3915 | 0.5678 | nan | 0.6749 | 0.3898 | 0.6395 | 0.6779 | 0.0267 | nan | 0.0162 | nan | 0.2609 | 0.4275 | 0.4104 | 0.0 | 0.6576 | 0.3815 | 0.6209 | 0.6334 | 0.0267 | nan | 0.0161 | nan | 0.2406 | 0.3728 | 0.3871 |
| 0.1828 | 15.12 | 2420 | 0.3641 | 0.3060 | 0.3564 | 0.5373 | nan | 0.6464 | 0.3564 | 0.5524 | 0.6620 | 0.0132 | nan | 0.0079 | nan | 0.1817 | 0.3260 | 0.4615 | 0.0 | 0.6299 | 0.3491 | 0.5459 | 0.6195 | 0.0132 | nan | 0.0078 | nan | 0.1742 | 0.2937 | 0.4271 |
| 0.0202 | 15.25 | 2440 | 0.3637 | 0.3310 | 0.3910 | 0.5513 | nan | 0.6071 | 0.4158 | 0.6488 | 0.7365 | 0.0339 | nan | 0.0181 | nan | 0.2819 | 0.3906 | 0.3863 | 0.0 | 0.5933 | 0.3991 | 0.6291 | 0.6687 | 0.0339 | nan | 0.0179 | nan | 0.2585 | 0.3465 | 0.3631 |
| 0.3244 | 15.38 | 2460 | 0.3752 | 0.3290 | 0.3885 | 0.5348 | nan | 0.5915 | 0.4283 | 0.6163 | 0.6207 | 0.0442 | nan | 0.0320 | nan | 0.2746 | 0.4170 | 0.4722 | 0.0 | 0.5762 | 0.4087 | 0.5914 | 0.5837 | 0.0442 | nan | 0.0315 | nan | 0.2498 | 0.3633 | 0.4410 |
| 0.0619 | 15.5 | 2480 | 0.3794 | 0.3052 | 0.3560 | 0.5207 | nan | 0.6266 | 0.3283 | 0.6050 | 0.6392 | 0.0387 | nan | 0.0303 | nan | 0.2255 | 0.3605 | 0.3501 | 0.0 | 0.6104 | 0.3232 | 0.5877 | 0.5951 | 0.0387 | nan | 0.0300 | nan | 0.2125 | 0.3189 | 0.3357 |
| 0.0788 | 15.62 | 2500 | 0.3641 | 0.3564 | 0.4204 | 0.6062 | nan | 0.6924 | 0.5027 | 0.6264 | 0.7243 | 0.0330 | nan | 0.0246 | nan | 0.2279 | 0.4379 | 0.5142 | 0.0 | 0.6721 | 0.4863 | 0.6116 | 0.6645 | 0.0330 | nan | 0.0244 | nan | 0.2181 | 0.3780 | 0.4764 |
| 0.1819 | 15.75 | 2520 | 0.3730 | 0.3330 | 0.3919 | 0.5496 | nan | 0.6366 | 0.3864 | 0.5794 | 0.6584 | 0.0296 | nan | 0.0193 | nan | 0.2730 | 0.4636 | 0.4804 | 0.0 | 0.6209 | 0.3793 | 0.5687 | 0.6158 | 0.0296 | nan | 0.0190 | nan | 0.2520 | 0.3970 | 0.4476 |
| 0.1583 | 15.88 | 2540 | 0.3707 | 0.3437 | 0.4053 | 0.5775 | nan | 0.6838 | 0.4042 | 0.6664 | 0.6669 | 0.0270 | nan | 0.0380 | nan | 0.2653 | 0.4832 | 0.4127 | 0.0 | 0.6658 | 0.3962 | 0.6401 | 0.6213 | 0.0269 | nan | 0.0374 | nan | 0.2491 | 0.4074 | 0.3930 |
| 0.0973 | 16.0 | 2560 | 0.3789 | 0.3102 | 0.3645 | 0.5251 | nan | 0.6202 | 0.3226 | 0.6377 | 0.6612 | 0.0240 | nan | 0.0420 | nan | 0.2656 | 0.3646 | 0.3432 | 0.0 | 0.6064 | 0.3155 | 0.6133 | 0.6167 | 0.0240 | nan | 0.0411 | nan | 0.2490 | 0.3193 | 0.3171 |
| 0.188 | 16.12 | 2580 | 0.3646 | 0.3456 | 0.4070 | 0.5929 | nan | 0.7021 | 0.4230 | 0.6325 | 0.7055 | 0.0310 | nan | 0.0434 | nan | 0.3221 | 0.3235 | 0.4801 | 0.0 | 0.6798 | 0.4127 | 0.6135 | 0.6518 | 0.0309 | nan | 0.0423 | nan | 0.2943 | 0.2847 | 0.4458 |
| 0.0581 | 16.25 | 2600 | 0.4030 | 0.2992 | 0.3506 | 0.4870 | nan | 0.5565 | 0.3320 | 0.6005 | 0.6297 | 0.0621 | nan | 0.0399 | nan | 0.2508 | 0.4016 | 0.2824 | 0.0 | 0.5438 | 0.3243 | 0.5851 | 0.5852 | 0.0610 | nan | 0.0391 | nan | 0.2336 | 0.3470 | 0.2729 |
| 0.1891 | 16.38 | 2620 | 0.3766 | 0.3408 | 0.4029 | 0.5618 | nan | 0.6308 | 0.4264 | 0.6309 | 0.6672 | 0.0537 | nan | 0.0358 | nan | 0.2444 | 0.4289 | 0.5077 | 0.0 | 0.6150 | 0.4147 | 0.6043 | 0.6202 | 0.0528 | nan | 0.0353 | nan | 0.2302 | 0.3673 | 0.4679 |
| 0.2495 | 16.5 | 2640 | 0.3758 | 0.3481 | 0.4106 | 0.5897 | nan | 0.7078 | 0.4004 | 0.6212 | 0.6981 | 0.0769 | nan | 0.0402 | nan | 0.2365 | 0.4410 | 0.4736 | 0.0 | 0.6873 | 0.3937 | 0.6026 | 0.6416 | 0.0749 | nan | 0.0391 | nan | 0.2227 | 0.3776 | 0.4418 |
| 0.153 | 16.62 | 2660 | 0.3899 | 0.2935 | 0.3435 | 0.5074 | nan | 0.6166 | 0.3835 | 0.5905 | 0.5641 | 0.0402 | nan | 0.0290 | nan | 0.2244 | 0.3407 | 0.3025 | 0.0 | 0.6027 | 0.3733 | 0.5639 | 0.5320 | 0.0402 | nan | 0.0286 | nan | 0.2092 | 0.2969 | 0.2883 |
| 0.083 | 16.75 | 2680 | 0.3758 | 0.3137 | 0.3669 | 0.5555 | nan | 0.6916 | 0.3837 | 0.5386 | 0.6336 | 0.0400 | nan | 0.0247 | nan | 0.2183 | 0.3063 | 0.4655 | 0.0 | 0.6742 | 0.3761 | 0.5322 | 0.5845 | 0.0400 | nan | 0.0243 | nan | 0.2066 | 0.2715 | 0.4276 |
| 0.136 | 16.88 | 2700 | 0.3570 | 0.3606 | 0.4264 | 0.6044 | nan | 0.7070 | 0.4583 | 0.6819 | 0.6820 | 0.0525 | nan | 0.0404 | nan | 0.3045 | 0.4473 | 0.4636 | 0.0 | 0.6892 | 0.4490 | 0.6517 | 0.6349 | 0.0524 | nan | 0.0396 | nan | 0.2758 | 0.3799 | 0.4333 |
| 0.0609 | 17.0 | 2720 | 0.3657 | 0.3182 | 0.3719 | 0.5547 | nan | 0.6883 | 0.3931 | 0.5851 | 0.6371 | 0.0482 | nan | 0.0290 | nan | 0.2633 | 0.3435 | 0.3598 | 0.0 | 0.6688 | 0.3841 | 0.5697 | 0.5975 | 0.0482 | nan | 0.0285 | nan | 0.2450 | 0.3019 | 0.3384 |
| 0.1483 | 17.12 | 2740 | 0.3847 | 0.3068 | 0.3620 | 0.5117 | nan | 0.5805 | 0.3386 | 0.5781 | 0.6593 | 0.0445 | nan | 0.0343 | nan | 0.2763 | 0.3060 | 0.4403 | 0.0 | 0.5686 | 0.3269 | 0.5604 | 0.6165 | 0.0439 | nan | 0.0335 | nan | 0.2507 | 0.2700 | 0.3978 |
| 0.2166 | 17.25 | 2760 | 0.3986 | 0.2853 | 0.3307 | 0.5154 | nan | 0.6550 | 0.3083 | 0.5705 | 0.6325 | 0.0442 | nan | 0.0205 | nan | 0.1784 | 0.2618 | 0.3047 | 0.0 | 0.6390 | 0.3038 | 0.5618 | 0.5883 | 0.0441 | nan | 0.0203 | nan | 0.1703 | 0.2369 | 0.2887 |
| 0.096 | 17.38 | 2780 | 0.4041 | 0.2884 | 0.3367 | 0.4939 | nan | 0.6043 | 0.2952 | 0.5538 | 0.6037 | 0.0470 | nan | 0.0231 | nan | 0.2257 | 0.3329 | 0.3444 | 0.0 | 0.5912 | 0.2905 | 0.5458 | 0.5617 | 0.0470 | nan | 0.0228 | nan | 0.2106 | 0.2930 | 0.3217 |
| 0.0839 | 17.5 | 2800 | 0.3773 | 0.3516 | 0.4152 | 0.6011 | nan | 0.7099 | 0.4639 | 0.6553 | 0.7115 | 0.0586 | nan | 0.0328 | nan | 0.3030 | 0.3879 | 0.4140 | 0.0 | 0.6884 | 0.4484 | 0.6341 | 0.6528 | 0.0586 | nan | 0.0324 | nan | 0.2784 | 0.3349 | 0.3877 |
| 0.1185 | 17.62 | 2820 | 0.3753 | 0.3255 | 0.3816 | 0.5517 | nan | 0.6575 | 0.4173 | 0.6167 | 0.6411 | 0.0711 | nan | 0.0270 | nan | 0.2554 | 0.3842 | 0.3643 | 0.0 | 0.6400 | 0.4044 | 0.6019 | 0.5946 | 0.0708 | nan | 0.0267 | nan | 0.2403 | 0.3319 | 0.3448 |
| 0.1155 | 17.75 | 2840 | 0.3742 | 0.3354 | 0.3939 | 0.5575 | nan | 0.6693 | 0.3661 | 0.6355 | 0.6422 | 0.0786 | nan | 0.0323 | nan | 0.2502 | 0.4593 | 0.4115 | 0.0 | 0.6510 | 0.3595 | 0.6193 | 0.5977 | 0.0776 | nan | 0.0320 | nan | 0.2347 | 0.3894 | 0.3928 |
| 0.1311 | 17.88 | 2860 | 0.3862 | 0.3306 | 0.3882 | 0.5436 | nan | 0.6268 | 0.3595 | 0.6226 | 0.6840 | 0.0823 | nan | 0.0400 | nan | 0.2510 | 0.4016 | 0.4261 | 0.0 | 0.6125 | 0.3525 | 0.6087 | 0.6301 | 0.0816 | nan | 0.0394 | nan | 0.2348 | 0.3472 | 0.3995 |
| 0.0059 | 18.0 | 2880 | 0.3997 | 0.2999 | 0.3509 | 0.5209 | nan | 0.6406 | 0.3559 | 0.6409 | 0.5987 | 0.0524 | nan | 0.0490 | nan | 0.2912 | 0.2650 | 0.2648 | 0.0 | 0.6221 | 0.3495 | 0.6051 | 0.5625 | 0.0523 | nan | 0.0474 | nan | 0.2684 | 0.2353 | 0.2563 |
| 0.2505 | 18.12 | 2900 | 0.3948 | 0.3107 | 0.3648 | 0.5266 | nan | 0.6174 | 0.3923 | 0.5960 | 0.6209 | 0.0640 | nan | 0.0383 | nan | 0.2507 | 0.3099 | 0.3937 | 0.0 | 0.6028 | 0.3777 | 0.5739 | 0.5786 | 0.0634 | nan | 0.0375 | nan | 0.2350 | 0.2722 | 0.3660 |
| 0.1181 | 18.25 | 2920 | 0.3764 | 0.3401 | 0.3999 | 0.5709 | nan | 0.6792 | 0.4368 | 0.6006 | 0.6382 | 0.0752 | nan | 0.0399 | nan | 0.2711 | 0.3991 | 0.4587 | 0.0 | 0.6604 | 0.4226 | 0.5815 | 0.5978 | 0.0736 | nan | 0.0392 | nan | 0.2526 | 0.3440 | 0.4290 |
| 0.1169 | 18.38 | 2940 | 0.3737 | 0.3522 | 0.4141 | 0.5853 | nan | 0.6818 | 0.4810 | 0.6037 | 0.6819 | 0.1025 | nan | 0.0410 | nan | 0.2644 | 0.4201 | 0.4503 | 0.0 | 0.6615 | 0.4636 | 0.5918 | 0.6288 | 0.1004 | nan | 0.0400 | nan | 0.2486 | 0.3615 | 0.4256 |
| 0.1075 | 18.5 | 2960 | 0.3985 | 0.3176 | 0.3711 | 0.5438 | nan | 0.6841 | 0.3154 | 0.5765 | 0.6472 | 0.0906 | nan | 0.0435 | nan | 0.2634 | 0.3396 | 0.3801 | 0.0 | 0.6643 | 0.3104 | 0.5673 | 0.6015 | 0.0897 | nan | 0.0423 | nan | 0.2444 | 0.3020 | 0.3537 |
| 0.1468 | 18.62 | 2980 | 0.3809 | 0.3592 | 0.4249 | 0.5883 | nan | 0.6884 | 0.4059 | 0.6539 | 0.6849 | 0.0905 | nan | 0.0482 | nan | 0.2846 | 0.4887 | 0.4791 | 0.0 | 0.6685 | 0.3996 | 0.6290 | 0.6350 | 0.0881 | nan | 0.0469 | nan | 0.2632 | 0.4144 | 0.4469 |
| 0.1438 | 18.75 | 3000 | 0.4059 | 0.3267 | 0.3847 | 0.5308 | nan | 0.6163 | 0.4193 | 0.5594 | 0.6195 | 0.1263 | nan | 0.0415 | nan | 0.2273 | 0.4443 | 0.4081 | 0.0 | 0.6014 | 0.4035 | 0.5470 | 0.5813 | 0.1224 | nan | 0.0406 | nan | 0.2127 | 0.3774 | 0.3810 |
| 0.1021 | 18.88 | 3020 | 0.3904 | 0.3525 | 0.4154 | 0.5957 | nan | 0.7113 | 0.4367 | 0.6382 | 0.7089 | 0.1015 | nan | 0.0432 | nan | 0.2595 | 0.4177 | 0.4213 | 0.0 | 0.6907 | 0.4264 | 0.6212 | 0.6528 | 0.0988 | nan | 0.0422 | nan | 0.2428 | 0.3582 | 0.3923 |
| 0.0308 | 19.0 | 3040 | 0.3790 | 0.3502 | 0.4129 | 0.5842 | nan | 0.6850 | 0.4325 | 0.6381 | 0.6733 | 0.0994 | nan | 0.0429 | nan | 0.2607 | 0.4038 | 0.4803 | 0.0 | 0.6662 | 0.4228 | 0.6189 | 0.6261 | 0.0967 | nan | 0.0420 | nan | 0.2443 | 0.3470 | 0.4380 |
| 0.2127 | 19.12 | 3060 | 0.3938 | 0.3268 | 0.3842 | 0.5413 | nan | 0.6469 | 0.3713 | 0.5834 | 0.6338 | 0.0856 | nan | 0.0419 | nan | 0.2657 | 0.4172 | 0.4118 | 0.0 | 0.6307 | 0.3641 | 0.5711 | 0.5914 | 0.0838 | nan | 0.0410 | nan | 0.2465 | 0.3570 | 0.3822 |
| 0.1228 | 19.25 | 3080 | 0.3930 | 0.3371 | 0.3961 | 0.5707 | nan | 0.6900 | 0.3839 | 0.5852 | 0.6874 | 0.0759 | nan | 0.0337 | nan | 0.2645 | 0.4055 | 0.4391 | 0.0 | 0.6701 | 0.3767 | 0.5733 | 0.6383 | 0.0745 | nan | 0.0331 | nan | 0.2462 | 0.3494 | 0.4094 |
| 0.0882 | 19.38 | 3100 | 0.3940 | 0.3372 | 0.3954 | 0.5712 | nan | 0.6829 | 0.4213 | 0.6223 | 0.6689 | 0.0740 | nan | 0.0338 | nan | 0.2622 | 0.3936 | 0.3997 | 0.0 | 0.6636 | 0.4108 | 0.6046 | 0.6226 | 0.0729 | nan | 0.0332 | nan | 0.2445 | 0.3419 | 0.3776 |
| 0.0798 | 19.5 | 3120 | 0.4141 | 0.3078 | 0.3595 | 0.5247 | nan | 0.6568 | 0.2977 | 0.5794 | 0.6253 | 0.0788 | nan | 0.0341 | nan | 0.2380 | 0.3701 | 0.3549 | 0.0 | 0.6388 | 0.2941 | 0.5651 | 0.5852 | 0.0775 | nan | 0.0332 | nan | 0.2237 | 0.3256 | 0.3351 |
| 0.2337 | 19.62 | 3140 | 0.3981 | 0.3068 | 0.3583 | 0.5271 | nan | 0.6597 | 0.3512 | 0.5748 | 0.5754 | 0.0715 | nan | 0.0304 | nan | 0.2115 | 0.3767 | 0.3733 | 0.0 | 0.6439 | 0.3439 | 0.5569 | 0.5412 | 0.0701 | nan | 0.0299 | nan | 0.2004 | 0.3301 | 0.3515 |
| 0.265 | 19.75 | 3160 | 0.3878 | 0.3441 | 0.4051 | 0.5883 | nan | 0.7216 | 0.4309 | 0.6112 | 0.6456 | 0.1042 | nan | 0.0316 | nan | 0.2539 | 0.3974 | 0.4499 | 0.0 | 0.6987 | 0.4186 | 0.5900 | 0.6016 | 0.1009 | nan | 0.0311 | nan | 0.2385 | 0.3425 | 0.4188 |
| 0.1612 | 19.88 | 3180 | 0.3921 | 0.3399 | 0.4015 | 0.5622 | nan | 0.6680 | 0.3758 | 0.6031 | 0.6745 | 0.1219 | nan | 0.0368 | nan | 0.2783 | 0.4113 | 0.4439 | 0.0 | 0.6509 | 0.3660 | 0.5870 | 0.6189 | 0.1176 | nan | 0.0361 | nan | 0.2584 | 0.3531 | 0.4113 |
| 0.119 | 20.0 | 3200 | 0.3930 | 0.3381 | 0.3973 | 0.5638 | nan | 0.6846 | 0.3830 | 0.6002 | 0.6528 | 0.1236 | nan | 0.0364 | nan | 0.2504 | 0.4254 | 0.4196 | 0.0 | 0.6659 | 0.3747 | 0.5862 | 0.6027 | 0.1201 | nan | 0.0357 | nan | 0.2355 | 0.3642 | 0.3958 |
| 0.2127 | 20.12 | 3220 | 0.4055 | 0.3262 | 0.3828 | 0.5438 | nan | 0.6687 | 0.3572 | 0.5735 | 0.6260 | 0.1227 | nan | 0.0452 | nan | 0.2759 | 0.3804 | 0.3954 | 0.0 | 0.6509 | 0.3480 | 0.5618 | 0.5802 | 0.1193 | nan | 0.0442 | nan | 0.2554 | 0.3293 | 0.3729 |
| 0.1082 | 20.25 | 3240 | 0.4076 | 0.3233 | 0.3790 | 0.5477 | nan | 0.6622 | 0.3678 | 0.6170 | 0.6725 | 0.1201 | nan | 0.0408 | nan | 0.2368 | 0.3502 | 0.3435 | 0.0 | 0.6445 | 0.3588 | 0.5987 | 0.6188 | 0.1167 | nan | 0.0400 | nan | 0.2242 | 0.3062 | 0.3253 |
| 0.1413 | 20.38 | 3260 | 0.3997 | 0.3315 | 0.3894 | 0.5515 | nan | 0.6595 | 0.3799 | 0.5992 | 0.6447 | 0.1206 | nan | 0.0406 | nan | 0.2610 | 0.3701 | 0.4294 | 0.0 | 0.6426 | 0.3715 | 0.5838 | 0.5996 | 0.1176 | nan | 0.0399 | nan | 0.2422 | 0.3200 | 0.3973 |
| 0.1308 | 20.5 | 3280 | 0.4210 | 0.3100 | 0.3624 | 0.5150 | nan | 0.6198 | 0.3069 | 0.5631 | 0.6500 | 0.1115 | nan | 0.0402 | nan | 0.2334 | 0.3535 | 0.3831 | 0.0 | 0.6049 | 0.3025 | 0.5540 | 0.6033 | 0.1102 | nan | 0.0393 | nan | 0.2189 | 0.3098 | 0.3574 |
| 0.0599 | 20.62 | 3300 | 0.3929 | 0.3457 | 0.4067 | 0.5822 | nan | 0.6963 | 0.4452 | 0.6235 | 0.6543 | 0.1024 | nan | 0.0453 | nan | 0.2641 | 0.3904 | 0.4386 | 0.0 | 0.6756 | 0.4337 | 0.6062 | 0.6087 | 0.1002 | nan | 0.0443 | nan | 0.2460 | 0.3365 | 0.4061 |
| 0.0948 | 20.75 | 3320 | 0.3978 | 0.3346 | 0.3933 | 0.5553 | nan | 0.6649 | 0.4108 | 0.6084 | 0.6221 | 0.1142 | nan | 0.0400 | nan | 0.2518 | 0.4197 | 0.4076 | 0.0 | 0.6479 | 0.3990 | 0.5918 | 0.5812 | 0.1109 | nan | 0.0393 | nan | 0.2337 | 0.3621 | 0.3800 |
| 0.0418 | 20.88 | 3340 | 0.3987 | 0.3413 | 0.4016 | 0.5691 | nan | 0.6820 | 0.4091 | 0.6041 | 0.6482 | 0.1110 | nan | 0.0344 | nan | 0.2501 | 0.4283 | 0.4476 | 0.0 | 0.6632 | 0.3996 | 0.5894 | 0.6026 | 0.1079 | nan | 0.0337 | nan | 0.2342 | 0.3677 | 0.4142 |
| 0.2251 | 21.0 | 3360 | 0.4091 | 0.3067 | 0.3576 | 0.5235 | nan | 0.6628 | 0.3239 | 0.5825 | 0.6036 | 0.1097 | nan | 0.0348 | nan | 0.2448 | 0.3554 | 0.3010 | 0.0 | 0.6446 | 0.3177 | 0.5663 | 0.5655 | 0.1064 | nan | 0.0340 | nan | 0.2288 | 0.3149 | 0.2890 |
| 0.0887 | 21.12 | 3380 | 0.3958 | 0.3502 | 0.4144 | 0.5830 | nan | 0.6798 | 0.4381 | 0.6436 | 0.7003 | 0.0985 | nan | 0.0553 | nan | 0.3298 | 0.3692 | 0.4148 | 0.0 | 0.6608 | 0.4252 | 0.6204 | 0.6454 | 0.0957 | nan | 0.0537 | nan | 0.2975 | 0.3220 | 0.3809 |
| 0.0981 | 21.25 | 3400 | 0.4053 | 0.3346 | 0.3938 | 0.5521 | nan | 0.6416 | 0.4158 | 0.6278 | 0.6327 | 0.1152 | nan | 0.0472 | nan | 0.2642 | 0.3607 | 0.4387 | 0.0 | 0.6267 | 0.4034 | 0.6096 | 0.5906 | 0.1116 | nan | 0.0460 | nan | 0.2430 | 0.3153 | 0.3998 |
| 0.129 | 21.38 | 3420 | 0.4044 | 0.3195 | 0.3724 | 0.5357 | nan | 0.6534 | 0.3715 | 0.5931 | 0.6088 | 0.1166 | nan | 0.0411 | nan | 0.2463 | 0.3438 | 0.3765 | 0.0 | 0.6364 | 0.3635 | 0.5791 | 0.5711 | 0.1137 | nan | 0.0403 | nan | 0.2292 | 0.3042 | 0.3575 |
| 0.0612 | 21.5 | 3440 | 0.3903 | 0.3486 | 0.4101 | 0.5821 | nan | 0.6849 | 0.4416 | 0.6240 | 0.6569 | 0.1031 | nan | 0.0424 | nan | 0.2770 | 0.3685 | 0.4925 | 0.0 | 0.6663 | 0.4307 | 0.6044 | 0.6109 | 0.0999 | nan | 0.0416 | nan | 0.2569 | 0.3219 | 0.4536 |
| 0.1272 | 21.62 | 3460 | 0.4190 | 0.3084 | 0.3600 | 0.5155 | nan | 0.6333 | 0.3347 | 0.5558 | 0.6046 | 0.0905 | nan | 0.0351 | nan | 0.2328 | 0.4021 | 0.3511 | 0.0 | 0.6178 | 0.3281 | 0.5458 | 0.5679 | 0.0895 | nan | 0.0345 | nan | 0.2190 | 0.3491 | 0.3325 |
| 0.0396 | 21.75 | 3480 | 0.4083 | 0.3243 | 0.3801 | 0.5463 | nan | 0.6695 | 0.3636 | 0.6050 | 0.6290 | 0.0988 | nan | 0.0423 | nan | 0.2516 | 0.3892 | 0.3716 | 0.0 | 0.6523 | 0.3566 | 0.5879 | 0.5871 | 0.0963 | nan | 0.0414 | nan | 0.2360 | 0.3372 | 0.3485 |
| 0.1612 | 21.88 | 3500 | 0.4034 | 0.3276 | 0.3836 | 0.5571 | nan | 0.6817 | 0.4096 | 0.5779 | 0.6470 | 0.1009 | nan | 0.0430 | nan | 0.2657 | 0.3577 | 0.3690 | 0.0 | 0.6632 | 0.3995 | 0.5630 | 0.6014 | 0.0982 | nan | 0.0420 | nan | 0.2475 | 0.3112 | 0.3502 |
| 0.168 | 22.0 | 3520 | 0.3960 | 0.3299 | 0.3866 | 0.5620 | nan | 0.6894 | 0.3956 | 0.5776 | 0.6484 | 0.0963 | nan | 0.0393 | nan | 0.2551 | 0.3656 | 0.4123 | 0.0 | 0.6689 | 0.3867 | 0.5646 | 0.6040 | 0.0935 | nan | 0.0384 | nan | 0.2394 | 0.3177 | 0.3853 |
| 0.1447 | 22.12 | 3540 | 0.4078 | 0.3373 | 0.3970 | 0.5641 | nan | 0.6879 | 0.3803 | 0.5834 | 0.6534 | 0.0952 | nan | 0.0500 | nan | 0.2945 | 0.4033 | 0.4248 | 0.0 | 0.6674 | 0.3724 | 0.5691 | 0.6079 | 0.0923 | nan | 0.0486 | nan | 0.2704 | 0.3478 | 0.3974 |
| 0.053 | 22.25 | 3560 | 0.4011 | 0.3369 | 0.3956 | 0.5696 | nan | 0.7026 | 0.4088 | 0.5707 | 0.6348 | 0.0921 | nan | 0.0507 | nan | 0.2935 | 0.3820 | 0.4256 | 0.0 | 0.6803 | 0.3990 | 0.5570 | 0.5941 | 0.0895 | nan | 0.0494 | nan | 0.2709 | 0.3313 | 0.3978 |
| 0.1187 | 22.38 | 3580 | 0.4053 | 0.3330 | 0.3913 | 0.5555 | nan | 0.6709 | 0.3798 | 0.5832 | 0.6476 | 0.0971 | nan | 0.0440 | nan | 0.2728 | 0.3996 | 0.4266 | 0.0 | 0.6529 | 0.3718 | 0.5682 | 0.6036 | 0.0940 | nan | 0.0429 | nan | 0.2524 | 0.3462 | 0.3977 |
| 0.1373 | 22.5 | 3600 | 0.4094 | 0.3264 | 0.3829 | 0.5411 | nan | 0.6504 | 0.3713 | 0.5818 | 0.6355 | 0.1050 | nan | 0.0402 | nan | 0.2590 | 0.4045 | 0.3984 | 0.0 | 0.6343 | 0.3624 | 0.5688 | 0.5951 | 0.1009 | nan | 0.0394 | nan | 0.2407 | 0.3515 | 0.3709 |
| 0.144 | 22.62 | 3620 | 0.4051 | 0.3243 | 0.3792 | 0.5460 | nan | 0.6617 | 0.3776 | 0.6009 | 0.6535 | 0.1049 | nan | 0.0384 | nan | 0.2643 | 0.3601 | 0.3512 | 0.0 | 0.6448 | 0.3688 | 0.5872 | 0.6082 | 0.1005 | nan | 0.0377 | nan | 0.2463 | 0.3173 | 0.3320 |
| 0.0716 | 22.75 | 3640 | 0.4071 | 0.3209 | 0.3750 | 0.5386 | nan | 0.6443 | 0.3817 | 0.5979 | 0.6464 | 0.1080 | nan | 0.0381 | nan | 0.2470 | 0.3450 | 0.3667 | 0.0 | 0.6288 | 0.3727 | 0.5835 | 0.6006 | 0.1032 | nan | 0.0374 | nan | 0.2315 | 0.3057 | 0.3453 |
| 0.0869 | 22.88 | 3660 | 0.4162 | 0.3118 | 0.3636 | 0.5269 | nan | 0.6438 | 0.3654 | 0.5745 | 0.6012 | 0.1058 | nan | 0.0359 | nan | 0.2423 | 0.3257 | 0.3781 | 0.0 | 0.6281 | 0.3557 | 0.5610 | 0.5643 | 0.1014 | nan | 0.0353 | nan | 0.2266 | 0.2902 | 0.3554 |
| 0.0846 | 23.0 | 3680 | 0.4079 | 0.3326 | 0.3902 | 0.5584 | nan | 0.6786 | 0.3830 | 0.5896 | 0.6438 | 0.1063 | nan | 0.0384 | nan | 0.2539 | 0.3928 | 0.4253 | 0.0 | 0.6605 | 0.3742 | 0.5765 | 0.5994 | 0.1019 | nan | 0.0376 | nan | 0.2376 | 0.3425 | 0.3957 |
| 0.1137 | 23.12 | 3700 | 0.4062 | 0.3270 | 0.3827 | 0.5498 | nan | 0.6647 | 0.3757 | 0.6069 | 0.6502 | 0.1083 | nan | 0.0380 | nan | 0.2465 | 0.3729 | 0.3814 | 0.0 | 0.6474 | 0.3675 | 0.5903 | 0.6042 | 0.1038 | nan | 0.0372 | nan | 0.2317 | 0.3290 | 0.3588 |
| 0.109 | 23.25 | 3720 | 0.4160 | 0.3217 | 0.3761 | 0.5445 | nan | 0.6679 | 0.3698 | 0.5816 | 0.6386 | 0.1013 | nan | 0.0424 | nan | 0.2568 | 0.3538 | 0.3723 | 0.0 | 0.6503 | 0.3618 | 0.5679 | 0.5943 | 0.0978 | nan | 0.0415 | nan | 0.2394 | 0.3129 | 0.3514 |
| 0.0314 | 23.38 | 3740 | 0.4166 | 0.3156 | 0.3685 | 0.5342 | nan | 0.6615 | 0.3651 | 0.5650 | 0.6126 | 0.1027 | nan | 0.0433 | nan | 0.2581 | 0.3479 | 0.3605 | 0.0 | 0.6445 | 0.3558 | 0.5521 | 0.5746 | 0.0988 | nan | 0.0424 | nan | 0.2393 | 0.3083 | 0.3401 |
| 0.1021 | 23.5 | 3760 | 0.4215 | 0.3218 | 0.3774 | 0.5350 | nan | 0.6494 | 0.3625 | 0.5961 | 0.6217 | 0.1038 | nan | 0.0448 | nan | 0.2675 | 0.3971 | 0.3539 | 0.0 | 0.6335 | 0.3532 | 0.5796 | 0.5818 | 0.0997 | nan | 0.0438 | nan | 0.2465 | 0.3460 | 0.3336 |
| 0.0086 | 23.62 | 3780 | 0.4121 | 0.3224 | 0.3779 | 0.5430 | nan | 0.6630 | 0.3837 | 0.5842 | 0.6257 | 0.1028 | nan | 0.0432 | nan | 0.2651 | 0.3702 | 0.3634 | 0.0 | 0.6461 | 0.3725 | 0.5697 | 0.5847 | 0.0990 | nan | 0.0423 | nan | 0.2451 | 0.3231 | 0.3413 |
| 0.1086 | 23.75 | 3800 | 0.4078 | 0.3309 | 0.3892 | 0.5521 | nan | 0.6630 | 0.3944 | 0.5991 | 0.6368 | 0.1092 | nan | 0.0444 | nan | 0.2721 | 0.3831 | 0.4007 | 0.0 | 0.6464 | 0.3824 | 0.5830 | 0.5932 | 0.1047 | nan | 0.0435 | nan | 0.2511 | 0.3331 | 0.3716 |
| 0.1121 | 23.88 | 3820 | 0.4119 | 0.3221 | 0.3780 | 0.5369 | nan | 0.6487 | 0.3695 | 0.5765 | 0.6271 | 0.1155 | nan | 0.0431 | nan | 0.2546 | 0.3730 | 0.3941 | 0.0 | 0.6329 | 0.3588 | 0.5643 | 0.5842 | 0.1105 | nan | 0.0422 | nan | 0.2368 | 0.3252 | 0.3666 |
| 0.1118 | 24.0 | 3840 | 0.4161 | 0.3239 | 0.3799 | 0.5430 | nan | 0.6645 | 0.3623 | 0.5649 | 0.6499 | 0.1168 | nan | 0.0422 | nan | 0.2492 | 0.3927 | 0.3769 | 0.0 | 0.6471 | 0.3532 | 0.5559 | 0.6011 | 0.1122 | nan | 0.0413 | nan | 0.2332 | 0.3407 | 0.3547 |
| 0.0399 | 24.12 | 3860 | 0.4107 | 0.3313 | 0.3888 | 0.5557 | nan | 0.6795 | 0.3847 | 0.5899 | 0.6415 | 0.1208 | nan | 0.0444 | nan | 0.2652 | 0.3843 | 0.3891 | 0.0 | 0.6607 | 0.3748 | 0.5767 | 0.5956 | 0.1152 | nan | 0.0435 | nan | 0.2462 | 0.3345 | 0.3660 |
| 0.0892 | 24.25 | 3880 | 0.4233 | 0.3200 | 0.3743 | 0.5371 | nan | 0.6571 | 0.3608 | 0.5803 | 0.6290 | 0.1144 | nan | 0.0414 | nan | 0.2556 | 0.3617 | 0.3685 | 0.0 | 0.6402 | 0.3525 | 0.5683 | 0.5850 | 0.1098 | nan | 0.0406 | nan | 0.2377 | 0.3184 | 0.3476 |
| 0.0504 | 24.38 | 3900 | 0.4126 | 0.3272 | 0.3838 | 0.5469 | nan | 0.6627 | 0.3726 | 0.5902 | 0.6472 | 0.1122 | nan | 0.0434 | nan | 0.2626 | 0.3820 | 0.3810 | 0.0 | 0.6457 | 0.3638 | 0.5776 | 0.5995 | 0.1075 | nan | 0.0425 | nan | 0.2436 | 0.3331 | 0.3583 |
| 0.1415 | 24.5 | 3920 | 0.4125 | 0.3313 | 0.3890 | 0.5544 | nan | 0.6739 | 0.3827 | 0.5877 | 0.6506 | 0.1154 | nan | 0.0461 | nan | 0.2685 | 0.3880 | 0.3885 | 0.0 | 0.6554 | 0.3725 | 0.5757 | 0.6023 | 0.1107 | nan | 0.0451 | nan | 0.2488 | 0.3378 | 0.3647 |
| 0.0919 | 24.62 | 3940 | 0.4155 | 0.3205 | 0.3752 | 0.5383 | nan | 0.6617 | 0.3538 | 0.5750 | 0.6351 | 0.1112 | nan | 0.0444 | nan | 0.2523 | 0.3764 | 0.3666 | 0.0 | 0.6444 | 0.3457 | 0.5639 | 0.5899 | 0.1069 | nan | 0.0434 | nan | 0.2354 | 0.3296 | 0.3458 |
| 0.0443 | 24.75 | 3960 | 0.4101 | 0.3270 | 0.3837 | 0.5447 | nan | 0.6616 | 0.3704 | 0.5699 | 0.6538 | 0.1158 | nan | 0.0460 | nan | 0.2670 | 0.3933 | 0.3753 | 0.0 | 0.6444 | 0.3607 | 0.5605 | 0.6039 | 0.1111 | nan | 0.0450 | nan | 0.2475 | 0.3427 | 0.3542 |
| 0.1815 | 24.88 | 3980 | 0.4182 | 0.3204 | 0.3750 | 0.5340 | nan | 0.6499 | 0.3529 | 0.5731 | 0.6316 | 0.1129 | nan | 0.0422 | nan | 0.2545 | 0.3768 | 0.3816 | 0.0 | 0.6337 | 0.3452 | 0.5621 | 0.5873 | 0.1083 | nan | 0.0413 | nan | 0.2370 | 0.3306 | 0.3586 |
| 0.005 | 25.0 | 4000 | 0.4155 | 0.3349 | 0.3935 | 0.5591 | nan | 0.6815 | 0.3865 | 0.5805 | 0.6544 | 0.1155 | nan | 0.0497 | nan | 0.2779 | 0.3995 | 0.3959 | 0.0 | 0.6626 | 0.3764 | 0.5699 | 0.6056 | 0.1108 | nan | 0.0485 | nan | 0.2565 | 0.3465 | 0.3718 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
NovusResearch/Thestral-7B-0.1-TR-EN
|
NovusResearch
| 2024-01-25T10:37:18Z | 0 | 1 |
transformers
|
[
"transformers",
"text-generation",
"tr",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:06:30Z |
---
license: mit
language:
- tr
- en
library_name: transformers
pipeline_tag: text-generation
---
# Model Card: Thestral-7B-0.1-TR-EN
## Version: Based on Mistral-7B-v0.1, further pretrained on a large English-to-Turkish parallel corpus.
The training dataset consists of approximately 490,000 translated text pairs, predominantly sourced from movie subtitles. These pairs cover a diverse range of linguistic structures, idiomatic expressions, and cultural references, which equips the model for a variety of translation tasks within its domain.
## Model Training
Details regarding the model's training procedure, architecture, and fine-tuning processes will be extensively covered in the upcoming paper.
## Example Outputs (Finetuned Model)
```
Question: Acıbadem Mahallesi’nin sessiz ve huzurlu bir sokağından
bugünlere uzanan Erman Yaşar - Emre Özcan ikilisi Atıf Bey Sokak'ın
yeni bölümünde; Galatasaray'ın Trabzonspor'u deplasmanda 5-1 mağlup
etmesini, Wilfried Zaha'nın 9 numara performansını, Mauro Icardi'nin
dönmesiyle birlikte oluşabilecek hücum rotasyonunu, Fenerbahçe'nin
Samsunspor ile berabere kalmasını, İsmail Kartal'ın yaptığı değişiklikleri,
Kenan Yıldız'ın Juventus'ta ilk 11'e yerleşmesini ve Euro 2024'teki
hücum gücümüzü konuştu.
Answer: From the quiet and peaceful street of Acibadem Mahallesi to today;
Erman Yaşar - Emre Özcan duo in the new section of Atif Bey Street;
Galatasaray beating Trabzonspor 5-1 in the matchday, Wilfried Zaha's
performance at number 9, Mauro Icardi's return, Fenerbahçe drawing with
Samsunspor, Ismail Kartal's changes, Kenan Yıldız's place in the starting
XI at Juventus and our power for Euro 2024 were discussed.
```
## Example Outputs (Base Model)
```
Question: Acıbadem Mahallesi’nin sessiz ve huzurlu bir sokağından
bugünlere uzanan Erman Yaşar - Emre Özcan ikilisi Atıf Bey Sokak'ın
yeni bölümünde; Galatasaray'ın Trabzonspor'u deplasmanda 5-1 mağlup
etmesini, Wilfried Zaha'nın 9 numara performansını, Mauro Icardi'nin
dönmesiyle birlikte oluşabilecek hücum rotasyonunu, Fenerbahçe'nin
Samsunspor ile berabere kalmasını, İsmail Kartal'ın yaptığı değişiklikleri,
Kenan Yıldız'ın Juventus'ta ilk 11'e yerleşmesini ve Euro 2024'teki
hücum gücümüzü konuştu.
Answer: The sentence is about a street in Acibadem neighborhood that
is quiet and peaceful. The sentence talks about the new section of Atif Bey
Street, which extends from today to the past. It also mentions the victory of
Galatasaray over Trabzonspor in the match, the performance of Wilfried Zaha with
number 9, the return of Mauro Icardi, the draw between Fenerbahce and Samsunspor,
the changes made by Ismail Kartal, and the first eleven of Kenan Yildiz at Juventus.</s>
```
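## Usage
The snippet below loads the model for inference; passing `use_flash_attention_2=True` additionally requires the `flash-attn` package and a supported GPU.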
```python
import torch
from transformers import AutoModelForCausalLM

# Flash Attention 2 requires the flash-attn package and a compatible GPU
model = AutoModelForCausalLM.from_pretrained(
    "NovusResearch/Thestral-7B-0.1-TR-EN",
    use_flash_attention_2=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```
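A short generation sketch building on the loading snippet above (the prompt and generation settings are illustrative, and the tokenizer is assumed to be published in the same repository):
```python
from transformers import AutoTokenizer

# assumption: the tokenizer ships alongside the model weights
tokenizer = AutoTokenizer.from_pretrained("NovusResearch/Thestral-7B-0.1-TR-EN")

# illustrative Turkish input in the Question/Answer format used by the examples above
prompt = "Question: İstanbul'da yarın hava yağmurlu olacak.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```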
|
Prathyash/phi-1_5-finetuned-gsm8k
|
Prathyash
| 2024-01-25T10:31:56Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-01-24T07:02:16Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unspecified dataset.
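A minimal loading sketch (this assumes the repository hosts a PEFT/LoRA adapter on top of the base model, consistent with the `peft` library tag; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", torch_dtype=torch.float16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# attach the fine-tuned adapter from this repository
model = PeftModel.from_pretrained(base, "Prathyash/phi-1_5-finetuned-gsm8k")

# illustrative GSM8K-style prompt
prompt = "Question: A pen costs 2 dollars and a notebook costs 3 dollars. How much do 3 pens and 2 notebooks cost?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```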
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LazarusNLP/all-indobert-base
|
LazarusNLP
| 2024-01-25T10:28:59Z | 75 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ind",
"dataset:indonli",
"dataset:indolem/indo_story_cloze",
"dataset:unicamp-dl/mmarco",
"dataset:miracl/miracl",
"dataset:SEACrowd/wrete",
"dataset:SEACrowd/indolem_ntp",
"dataset:khalidalt/tydiqa-goldp",
"dataset:SEACrowd/facqa",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-25T09:37:10Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- indonli
- indolem/indo_story_cloze
- unicamp-dl/mmarco
- miracl/miracl
- SEACrowd/wrete
- SEACrowd/indolem_ntp
- khalidalt/tydiqa-goldp
- SEACrowd/facqa
language:
- ind
---
# LazarusNLP/all-indobert-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LazarusNLP/all-indobert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/all-indobert-base')
model = AutoModel.from_pretrained('LazarusNLP/all-indobert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/all-indobert-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`MultiDatasetDataLoader.MultiDatasetDataLoader` of length 352 with parameters:
```
{'batch_size_pairs': 384, 'batch_size_triplets': 256}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 176,
"weight_decay": 0.01
}
```
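For readers who want to reproduce a comparable setup, the sketch below wires these parameters together with the standard sentence-transformers fit API. It is a minimal sketch: a plain PyTorch `DataLoader` stands in for the custom `MultiDatasetDataLoader`, the starting IndoBERT checkpoint is an assumption not stated in this card, and the example pairs are placeholders rather than the actual training datasets listed above.
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, models

# assemble the architecture mirroring "Full Model Architecture" below;
# the underlying BERT checkpoint here is an assumption
word_embedding = models.Transformer("indobenchmark/indobert-base-p1", max_seq_length=128)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
model = SentenceTransformer(modules=[word_embedding, pooling])

# placeholder positive pairs; the real run mixed the datasets listed in the card metadata
train_examples = [
    InputExample(texts=["Ibu memasak nasi goreng.", "Nasi goreng dimasak oleh ibu."]),
    InputExample(texts=["Cuaca hari ini cerah.", "Langit hari ini sangat cerah."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# in-batch negatives loss with the scale reported above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=176,
    optimizer_params={"lr": 2e-5, "eps": 1e-6},
    weight_decay=0.01,
    max_grad_norm=1,
)
```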
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
anantg/zephyr-7b-beta-finetuned
|
anantg
| 2024-01-25T10:21:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T10:21:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kralley/mistral-7b-da-instr
|
Kralley
| 2024-01-25T10:20:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:00:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arun100/whisper-base-hi-4
|
arun100
| 2024-01-25T10:20:39Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:arun100/whisper-base-hi-3",
"base_model:finetune:arun100/whisper-base-hi-3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-25T06:09:30Z |
---
language:
- hi
license: apache-2.0
base_model: arun100/whisper-base-hi-3
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 hi
type: mozilla-foundation/common_voice_16_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 27.6637932833796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Hindi
This model is a fine-tuned version of [arun100/whisper-base-hi-3](https://huggingface.co/arun100/whisper-base-hi-3) on the mozilla-foundation/common_voice_16_0 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4681
- Wer: 27.6638
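A minimal transcription sketch with the 🤗 `pipeline` API (the audio path below is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arun100/whisper-base-hi-4")
# "sample_hi.wav" is a placeholder; pass any Hindi audio file (16 kHz mono works best)
print(asr("sample_hi.wav")["text"])
```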
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1251 | 13.16 | 1000 | 0.4681 | 27.6638 |
| 0.0812 | 26.32 | 2000 | 0.5046 | 28.2065 |
| 0.0584 | 39.47 | 3000 | 0.5393 | 28.3046 |
| 0.0441 | 52.63 | 4000 | 0.5639 | 28.4924 |
| 0.0392 | 65.79 | 5000 | 0.5734 | 28.5863 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
teticio/audio-diffusion-instrumental-hiphop-256
|
teticio
| 2024-01-25T10:06:12Z | 68 | 6 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-instrumental-hiphop-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-09-17T14:51:49Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-instrumental-hiphop-256
---
Denoising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-instrumental-hiphop-256](https://huggingface.co/datasets/teticio/audio-diffusion-instrumental-hiphop-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The audio consists of samples of instrumental Hip Hop music. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
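A minimal generation sketch; it assumes the `AudioDiffusionPipeline` integration in 🤗 diffusers (in newer diffusers releases the pipeline may live under a deprecated namespace):
```python
from diffusers import AudioDiffusionPipeline

pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-instrumental-hiphop-256")
output = pipe()                 # denoise from random noise
image = output.images[0]        # 256x256 mel spectrogram as a PIL image
audio = output.audios[0]        # waveform reconstructed from the spectrogram
```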
|
Viiiz/mistral-finetuned-alpaca
|
Viiiz
| 2024-01-25T10:06:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T04:57:12Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-alpaca
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
teticio/audio-diffusion-breaks-256
|
teticio
| 2024-01-25T10:05:00Z | 15 | 4 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-breaks-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-08-28T19:27:49Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-breaks-256
---
Denoising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-breaks-256](https://huggingface.co/datasets/teticio/audio-diffusion-breaks-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The audio consists of 30,000 samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/audio-diffusion-ddim-256
|
teticio
| 2024-01-25T10:04:00Z | 38 | 3 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-10-22T05:46:58Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
De-noising Diffusion Implicit Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/latent-audio-diffusion-ddim-256
|
teticio
| 2024-01-25T10:03:27Z | 27 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-05T14:11:26Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
Latent De-noising Diffusion Implicit Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/audio-diffusion-256
|
teticio
| 2024-01-25T10:01:05Z | 1,648 | 6 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-08-16T21:19:58Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
De-noising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
1-13-am/xlm-roberta-base-finetuned-panx-de-fr
|
1-13-am
| 2024-01-25T09:54:16Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-25T09:40:36Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1759
- F1: 0.8555
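A minimal tagging sketch with the 🤗 `pipeline` API (the example sentence is an assumption, not taken from the training data):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="1-13-am/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```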
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2831 | 1.0 | 835 | 0.1882 | 0.8223 |
| 0.1539 | 2.0 | 1670 | 0.1697 | 0.8405 |
| 0.0988 | 3.0 | 2505 | 0.1759 | 0.8555 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Shreyas0706/Zephyr-3B-Legal
|
Shreyas0706
| 2024-01-25T09:52:37Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:47:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kanh1/kanha-0.1-2.5-Mistral-7B
|
kanh1
| 2024-01-25T09:48:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:46:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moduMM/sooyongkim230125
|
moduMM
| 2024-01-25T09:38:54Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:maywell/Synatra-7B-v0.3-dpo",
"base_model:adapter:maywell/Synatra-7B-v0.3-dpo",
"region:us"
] | null | 2024-01-25T08:21:03Z |
---
library_name: peft
base_model: maywell/Synatra-7B-v0.3-dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
dstefa/roberta-base_topic_classification_nyt_news
|
dstefa
| 2024-01-25T09:31:05Z | 35,463 | 7 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"topic",
"classification",
"news",
"dataset:dstefa/New_York_Times_Topics",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T20:26:22Z |
---
license: mit
base_model: roberta-base
tags:
- topic
- classification
- news
- roberta
metrics:
- accuracy
- f1
- precision
- recall
datasets:
- dstefa/New_York_Times_Topics
widget:
- text: >-
Olympic champion Kostas Kederis today left hospital ahead of his date with IOC inquisitors claiming his innocence and vowing.
example_title: Sports
- text: >-
Although many individuals are doing fever checks to screen for Covid-19, many Covid-19 patients never have a fever.
example_title: Health and Wellness
- text: >-
Twelve myths about Russia's War in Ukraine exposed
example_title: Crime
model-index:
- name: roberta-base_topic_classification_nyt_news
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: New_York_Times_Topics
type: News
metrics:
- type: F1
name: F1
value: 0.91
- type: accuracy
name: accuracy
value: 0.91
- type: precision
name: precision
value: 0.91
- type: recall
name: recall
value: 0.91
pipeline_tag: text-classification
---
# roberta-base_topic_classification_nyt_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the NYT News dataset, which contains 256,000 news titles from articles published from 2000 to the present (https://www.kaggle.com/datasets/aryansingh0909/nyt-articles-21m-2000-present).
It achieves the following results on the test set of 51200 cases:
- Accuracy: 0.91
- F1: 0.91
- Precision: 0.91
- Recall: 0.91
## Training data
Training data was classified as follows:
class |Description
-|-
0 |Sports
1 |Arts, Culture, and Entertainment
2 |Business and Finance
3 |Health and Wellness
4 |Lifestyle and Fashion
5 |Science and Technology
6 |Politics
7 |Crime
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3192 | 1.0 | 20480 | 0.4078 | 0.8865 | 0.8859 | 0.8892 | 0.8865 |
| 0.2863 | 2.0 | 40960 | 0.4271 | 0.8972 | 0.8970 | 0.8982 | 0.8972 |
| 0.1979 | 3.0 | 61440 | 0.3797 | 0.9094 | 0.9092 | 0.9098 | 0.9094 |
| 0.1239 | 4.0 | 81920 | 0.3981 | 0.9117 | 0.9113 | 0.9114 | 0.9117 |
| 0.1472 | 5.0 | 102400 | 0.4033 | 0.9137 | 0.9135 | 0.9134 | 0.9137 |
### Model performance
class|precision|recall|f1|support
-|-|-|-|-
Sports|0.97|0.98|0.97|6400
Arts, Culture, and Entertainment|0.94|0.95|0.94|6400
Business and Finance|0.85|0.84|0.84|6400
Health and Wellness|0.90|0.93|0.91|6400
Lifestyle and Fashion|0.95|0.95|0.95|6400
Science and Technology|0.89|0.83|0.86|6400
Politics|0.93|0.88|0.90|6400
Crime|0.85|0.93|0.89|6400
accuracy| | |0.91|51200
macro avg|0.91|0.91|0.91|51200
weighted avg|0.91|0.91|0.91|51200
### How to use roberta-base_topic_classification_nyt_news with HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
model = AutoModelForSequenceClassification.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "Kederis proclaims innocence Olympic champion Kostas Kederis today left hospital ahead of his date with IOC inquisitors claiming his innocence and vowing."
pipe(text)
[{'label': 'Sports', 'score': 0.9989326596260071}]
```
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
HatimF/bartL_3
|
HatimF
| 2024-01-25T09:27:37Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T09:17:08Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bartL_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bartL_3
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8209
- Rouge1: 0.1782
- Rouge2: 0.0368
- Rougel: 0.1349
- Rougelsum: 0.1349
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 1515
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.283 | 1.0 | 317 | 2.7342 | 0.1742 | 0.0364 | 0.128 | 0.1283 | 20.0 |
| 2.6366 | 2.0 | 634 | 2.7466 | 0.1838 | 0.0448 | 0.139 | 0.1394 | 20.0 |
| 2.2437 | 3.0 | 951 | 2.7819 | 0.1691 | 0.0374 | 0.1277 | 0.1278 | 20.0 |
| 1.9957 | 4.0 | 1268 | 2.8209 | 0.1782 | 0.0368 | 0.1349 | 0.1349 | 20.0 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
vierlinglukas/q-FrozenLake-v1-4x4-noSlippery
|
vierlinglukas
| 2024-01-25T09:23:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T09:23:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks alias gymnasium as gym

# `load_from_hub` is the course helper that downloads and unpickles the Q-table
model = load_from_hub(repo_id="vierlinglukas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
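A greedy rollout continuing the snippet above; it assumes the pickled dictionary stores the table under a `qtable` key, as in the Hugging Face Deep RL course:
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    # Always take the action with the highest Q-value for the current state
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```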
|
Bluepearl/Random-Forest-Classification
|
Bluepearl
| 2024-01-25T09:23:05Z | 4 | 1 |
transformers
|
[
"transformers",
"random-forest",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T08:06:38Z |
# Introduction
TODO: Give a short introduction of your project. Let this section explain the objectives or the motivation behind this project.
# Getting Started
TODO: Guide users through getting your code up and running on their own system. In this section you can talk about:
1. Installation process
2. Software dependencies
3. Latest releases
4. API references
# Build and Test
TODO: Describe and show how to build your code and run the tests.
# Contribute
TODO: Explain how other users and developers can contribute to make your code better.
If you want to learn more about creating good readme files then refer to the following [guidelines](https://docs.microsoft.com/en-us/azure/devops/repos/git/create-a-readme?view=azure-devops). You can also seek inspiration from the readme files below:
- [ASP.NET Core](https://github.com/aspnet/Home)
- [Visual Studio Code](https://github.com/Microsoft/vscode)
- [Chakra Core](https://github.com/Microsoft/ChakraCore)
|
e22vvb/EN_t5-small_15_spider
|
e22vvb
| 2024-01-25T09:23:02Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T08:51:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: EN_t5-small_15_spider
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_t5-small_15_spider
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Rouge2 Precision: 0.5237
- Rouge2 Recall: 0.3349
- Rouge2 Fmeasure: 0.3832
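The model name suggests text-to-SQL on Spider, but the card does not document the exact input format; the sketch below only shows how to query the checkpoint through the `text2text-generation` pipeline:
```python
from transformers import pipeline

text2sql = pipeline("text2text-generation", model="e22vvb/EN_t5-small_15_spider")
# Passing a bare natural-language question is an assumption about the expected input format
print(text2sql("How many singers do we have?", max_new_tokens=64)[0]["generated_text"])
```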
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 438 | 0.4960 | 0.3036 | 0.2109 | 0.2216 |
| 1.0347 | 2.0 | 876 | 0.4496 | 0.3801 | 0.2453 | 0.2721 |
| 0.3664 | 3.0 | 1314 | 0.4839 | 0.4223 | 0.2627 | 0.2999 |
| 0.2966 | 4.0 | 1752 | 0.4671 | 0.457 | 0.2786 | 0.3218 |
| 0.2486 | 5.0 | 2190 | 0.4586 | 0.4628 | 0.2872 | 0.3302 |
| 0.2254 | 6.0 | 2628 | 0.4475 | 0.4873 | 0.3071 | 0.3521 |
| 0.2061 | 7.0 | 3066 | 0.4932 | 0.5017 | 0.3147 | 0.3622 |
| 0.1915 | 8.0 | 3504 | 0.4904 | 0.52 | 0.3331 | 0.3812 |
| 0.1915 | 9.0 | 3942 | 0.4924 | 0.5069 | 0.3189 | 0.3668 |
| 0.179 | 10.0 | 4380 | 0.4941 | 0.5094 | 0.3253 | 0.3721 |
| 0.1714 | 11.0 | 4818 | 0.4865 | 0.5064 | 0.3201 | 0.3672 |
| 0.1655 | 12.0 | 5256 | 0.4825 | 0.5147 | 0.3289 | 0.3762 |
| 0.1604 | 13.0 | 5694 | 0.4730 | 0.5155 | 0.3306 | 0.3776 |
| 0.1584 | 14.0 | 6132 | 0.4873 | 0.5239 | 0.336 | 0.384 |
| 0.1563 | 15.0 | 6570 | 0.4888 | 0.5237 | 0.3349 | 0.3832 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
janhq/llamacorn-1.1b-chat-GGUF
|
janhq
| 2024-01-25T09:17:08Z | 913 | 1 | null |
[
"gguf",
"trl",
"dpo",
"generated_from_trainer",
"base_model:jan-hq/LlamaCorn-1.1B-Chat",
"base_model:quantized:jan-hq/LlamaCorn-1.1B-Chat",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-25T09:15:50Z |
---
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: LlamaCorn-1.1B-Chat
results: []
base_model: jan-hq/LlamaCorn-1.1B-Chat
model_creator: jan-hq
model_name: LlamaCorn-1.1B-Chat
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [jan-hq/LlamaCorn-1.1B-Chat](https://huggingface.co/jan-hq/LlamaCorn-1.1B-Chat)
- Model creator: [jan-hq](https://huggingface.co/jan-hq)
- Original model: [LlamaCorn-1.1B-Chat](https://huggingface.co/jan-hq/LlamaCorn-1.1B-Chat)
- Model description: [Readme](https://huggingface.co/jan-hq/LlamaCorn-1.1B-Chat/blob/main/README.md)
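A minimal local-inference sketch with `llama-cpp-python`; the quantized filename and prompt format are assumptions, so check the repository's file list and the base model's chat template:
```python
from llama_cpp import Llama

llm = Llama(model_path="llamacorn-1.1b-chat.Q4_K_M.gguf", n_ctx=2048)  # filename is an assumption
out = llm("Question: What is the GGUF format?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```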
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand the repo so it can convert models into various formats.
|
Andyrasika/bart_tech_keywords
|
Andyrasika
| 2024-01-25T09:16:53Z | 91 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T09:15:49Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
model-index:
- name: bart_tech_keywords
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_tech_keywords
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4575 | 0.44 | 50 | 1.2105 |
| 1.2002 | 0.89 | 100 | 1.0730 |
| 1.0252 | 1.33 | 150 | 1.1039 |
| 0.9536 | 1.78 | 200 | 0.9614 |
| 0.9017 | 2.22 | 250 | 0.8853 |
| 0.8827 | 2.67 | 300 | 0.8916 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Taphu/chatbot_mistral7b
|
Taphu
| 2024-01-25T09:11:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T08:59:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liminerity/Mem-Beagle-7b-slerp-v6
|
liminerity
| 2024-01-25T09:04:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Mem-Beagle-7b-slerp-v2",
"liminerity/Mem-Beagle-7b-slerp-v3",
"base_model:limin-arc/Mem-Beagle-7b-slerp-v2",
"base_model:merge:limin-arc/Mem-Beagle-7b-slerp-v2",
"base_model:limin-arc/Mem-Beagle-7b-slerp-v3",
"base_model:merge:limin-arc/Mem-Beagle-7b-slerp-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T08:55:56Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Mem-Beagle-7b-slerp-v2
- liminerity/Mem-Beagle-7b-slerp-v3
base_model:
- liminerity/Mem-Beagle-7b-slerp-v2
- liminerity/Mem-Beagle-7b-slerp-v3
---
# Mem-Beagle-7b-slerp-v6
Mem-Beagle-7b-slerp-v6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Mem-Beagle-7b-slerp-v2](https://huggingface.co/liminerity/Mem-Beagle-7b-slerp-v2)
* [liminerity/Mem-Beagle-7b-slerp-v3](https://huggingface.co/liminerity/Mem-Beagle-7b-slerp-v3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Mem-Beagle-7b-slerp-v2
layer_range: [0, 32]
- model: liminerity/Mem-Beagle-7b-slerp-v3
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Mem-Beagle-7b-slerp-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Mem-Beagle-7b-slerp-v6"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Bossixd/roberta-base-finetuned-imdb
|
Bossixd
| 2024-01-25T09:02:04Z | 55 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-25T08:54:43Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: Bossixd/roberta-base-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bossixd/roberta-base-finetuned-imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7601
- Validation Loss: 1.5761
- Epoch: 0
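As a quick, illustrative way to query the checkpoint (not part of the original training setup), the standard `fill-mask` pipeline can be used; the example sentence is made up, and `framework="tf"` is passed on the assumption that this repo stores TensorFlow weights only:
```python
from transformers import pipeline

# Minimal masked-language-modeling sketch; RoBERTa uses the <mask> token
fill_mask = pipeline("fill-mask", model="Bossixd/roberta-base-finetuned-imdb", framework="tf")
print(fill_mask("This movie was an absolute <mask> from start to finish."))
```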
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.7601 | 1.5761 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
hojzas/setfit-proj8-code
|
hojzas
| 2024-01-25T09:02:01Z | 47 | 0 |
setfit
|
[
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:hojzas/proj8-label2",
"arxiv:2209.11055",
"base_model:flax-sentence-embeddings/st-codesearch-distilroberta-base",
"base_model:finetune:flax-sentence-embeddings/st-codesearch-distilroberta-base",
"co2_eq_emissions",
"region:us"
] |
text-classification
| 2024-01-25T09:01:43Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- hojzas/proj8-label2
metrics:
- accuracy
widget:
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys_used = {}\n for
item in iterable:\n rp = repr(key(item))\n if rp not in keys_used.keys():\n keys_used[rp]
= repr(item)\n yield item'
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for
i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))'
- text: 'def first_with_given_key(iterable, key=repr):\n set_of_keys = set()\n lambda_key
= (lambda x: key(x))\n for item in iterable:\n key = lambda_key(item)\n try:\n key_for_set
= hash(key)\n except TypeError:\n key_for_set = repr(key)\n if
key_for_set in set_of_keys:\n continue\n set_of_keys.add(key_for_set)\n yield
item'
- text: 'def first_with_given_key(iterable, key = lambda x: x):\n found_keys={}\n for
i in iterable:\n if key(i) not in found_keys.keys():\n found_keys[key(i)]=i\n yield
i'
- text: 'def first_with_given_key(the_iterable, key=lambda x: x):\n temp_keys=[]\n for
i in range(len(the_iterable)):\n if (key(the_iterable[i]) not in temp_keys):\n temp_keys.append(key(the_iterable[i]))\n yield
the_iterable[i]\n del temp_keys'
pipeline_tag: text-classification
inference: true
co2_eq_emissions:
emissions: 0.11699550964562933
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
ram_total_size: 251.49160385131836
hours_used: 0.002
base_model: flax-sentence-embeddings/st-codesearch-distilroberta-base
---
# SetFit with flax-sentence-embeddings/st-codesearch-distilroberta-base
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [hojzas/proj8-label2](https://huggingface.co/datasets/hojzas/proj8-label2) dataset that can be used for Text Classification. This SetFit model uses [flax-sentence-embeddings/st-codesearch-distilroberta-base](https://huggingface.co/flax-sentence-embeddings/st-codesearch-distilroberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
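For readers who want to reproduce a comparable setup, here is a minimal training sketch with the `setfit` trainer API; the dataset split and the `text`/`label` column names are assumptions, not taken from this card:
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Assumed split and column names -- adjust to the actual schema of hojzas/proj8-label2
train_ds = load_dataset("hojzas/proj8-label2", split="train")

# Start from the same Sentence Transformer body used by this model
model = SetFitModel.from_pretrained("flax-sentence-embeddings/st-codesearch-distilroberta-base")

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()

# Few-shot classification of a new code snippet
print(model.predict(["def f(xs): return sorted(set(xs))"]))
```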
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [flax-sentence-embeddings/st-codesearch-distilroberta-base](https://huggingface.co/flax-sentence-embeddings/st-codesearch-distilroberta-base)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [hojzas/proj8-label2](https://huggingface.co/datasets/hojzas/proj8-label2)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'def first_with_given_key(iterable, key=lambda x: x):\\n keys_in_list = []\\n for it in iterable:\\n if key(it) not in keys_in_list:\\n keys_in_list.append(key(it))\\n yield it'</li><li>'def first_with_given_key(iterable, key=lambda value: value):\\n it = iter(iterable)\\n saved_keys = []\\n while True:\\n try:\\n value = next(it)\\n if key(value) not in saved_keys:\\n saved_keys.append(key(value))\\n yield value\\n except StopIteration:\\n break'</li><li>'def first_with_given_key(iterable, key=None):\\n if key is None:\\n key = lambda x: x\\n item_list = []\\n key_set = set()\\n for item in iterable:\\n generated_item = key(item)\\n if generated_item not in item_list:\\n item_list.append(generated_item)\\n yield item'</li></ul> |
| 1 | <ul><li>'def first_with_given_key(lst, key = lambda x: x):\\n res = set()\\n for i in lst:\\n if repr(key(i)) not in res:\\n res.add(repr(key(i)))\\n yield i'</li><li>'def first_with_given_key(iterable, key=repr):\\n set_of_keys = set()\\n lambda_key = (lambda x: key(x))\\n for item in iterable:\\n key = lambda_key(item)\\n try:\\n key_for_set = hash(key)\\n except TypeError:\\n key_for_set = repr(key)\\n if key_for_set in set_of_keys:\\n continue\\n set_of_keys.add(key_for_set)\\n yield item'</li><li>'def first_with_given_key(iterable, key=None):\\n if key is None:\\n key = identity\\n appeared_keys = set()\\n for item in iterable:\\n generated_key = key(item)\\n if not generated_key.__hash__:\\n generated_key = repr(generated_key)\\n if generated_key not in appeared_keys:\\n appeared_keys.add(generated_key)\\n yield item'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/setfit-proj8-code")
# Run inference
preds = model("def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 43 | 90.28 | 119 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 5 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0159 | 1 | 0.3347 | - |
| 0.7937 | 50 | 0.0035 | - |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.002 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: No GPU used
- **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- **RAM Size**: 251.49 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_sft_lora_DPO_orca2_truthy
|
alnrg2arg
| 2024-01-25T08:46:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T08:46:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mIslamsatti/abc
|
mIslamsatti
| 2024-01-25T08:37:58Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-25T08:37:58Z |
---
license: other
license_name: muhammadislam
license_link: LICENSE
---
|
liminerity/Mem-Beagle-7b-slerp-v4
|
liminerity
| 2024-01-25T08:30:49Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Mem-Beagle-7b-slerp-v3",
"liminerity/Mem-Beagle-7b-slerp-v2",
"base_model:limin-arc/Mem-Beagle-7b-slerp-v2",
"base_model:merge:limin-arc/Mem-Beagle-7b-slerp-v2",
"base_model:limin-arc/Mem-Beagle-7b-slerp-v3",
"base_model:merge:limin-arc/Mem-Beagle-7b-slerp-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T08:10:42Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Mem-Beagle-7b-slerp-v3
- liminerity/Mem-Beagle-7b-slerp-v2
base_model:
- liminerity/Mem-Beagle-7b-slerp-v3
- liminerity/Mem-Beagle-7b-slerp-v2
---
# Mem-Beagle-7b-slerp-v4
Mem-Beagle-7b-slerp-v4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Mem-Beagle-7b-slerp-v3](https://huggingface.co/liminerity/Mem-Beagle-7b-slerp-v3)
* [liminerity/Mem-Beagle-7b-slerp-v2](https://huggingface.co/liminerity/Mem-Beagle-7b-slerp-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Mem-Beagle-7b-slerp-v3
layer_range: [0, 32]
- model: liminerity/Mem-Beagle-7b-slerp-v2
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Mem-Beagle-7b-slerp-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Mem-Beagle-7b-slerp-v4"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Redhotchilipoppy/MontelClassifier
|
Redhotchilipoppy
| 2024-01-25T08:28:59Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:Redhotchilipoppy/montelnewsspeaker",
"base_model:finetune:Redhotchilipoppy/montelnewsspeaker",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T13:10:45Z |
---
license: apache-2.0
base_model: Redhotchilipoppy/montelnewsspeaker
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: MontelClassifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MontelClassifier
This model is a fine-tuned version of [Redhotchilipoppy/montelnewsspeaker](https://huggingface.co/Redhotchilipoppy/montelnewsspeaker) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6256
- Accuracy: 0.7296
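As an illustrative (not officially documented) way to query the classifier, the `text-classification` pipeline can be used; the input sentence is invented, and the meaning of the predicted labels depends on the fine-tuning setup:
```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned GPT-2 classifier
classifier = pipeline("text-classification", model="Redhotchilipoppy/MontelClassifier")
print(classifier("Nordic power prices edged higher on Tuesday as wind output declined."))
```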
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 116 | 1.4596 | 0.5880 |
| No log | 2.0 | 232 | 1.1372 | 0.7039 |
| No log | 3.0 | 348 | 1.0803 | 0.6910 |
| No log | 4.0 | 464 | 1.1075 | 0.7253 |
| 1.2586 | 5.0 | 580 | 1.1247 | 0.7253 |
| 1.2586 | 6.0 | 696 | 1.2004 | 0.7253 |
| 1.2586 | 7.0 | 812 | 1.2889 | 0.7253 |
| 1.2586 | 8.0 | 928 | 1.3423 | 0.7253 |
| 0.3598 | 9.0 | 1044 | 1.5117 | 0.7425 |
| 0.3598 | 10.0 | 1160 | 1.4968 | 0.7425 |
| 0.3598 | 11.0 | 1276 | 1.4889 | 0.7253 |
| 0.3598 | 12.0 | 1392 | 1.5606 | 0.7296 |
| 0.0958 | 13.0 | 1508 | 1.6033 | 0.7339 |
| 0.0958 | 14.0 | 1624 | 1.6221 | 0.7339 |
| 0.0958 | 15.0 | 1740 | 1.6256 | 0.7296 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
alnrg2arg/blockchainlabs_7B_merged_test2_4_sft_fp16_DPO_orca2_truthy
|
alnrg2arg
| 2024-01-25T08:21:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T08:21:40Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sujayC66/en_spacy_small_sectors_ner
|
sujayC66
| 2024-01-25T08:16:03Z | 2 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2024-01-25T08:15:11Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_spacy_small_sectors_ner
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3921568627
- name: NER Recall
type: recall
value: 0.3125
- name: NER F Score
type: f_score
value: 0.347826087
---
| Feature | Description |
| --- | --- |
| **Name** | `en_spacy_small_sectors_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SECTOR` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 34.78 |
| `ENTS_P` | 39.22 |
| `ENTS_R` | 31.25 |
| `TRANSFORMER_LOSS` | 4057.39 |
| `NER_LOSS` | 48546.58 |
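As a usage sketch, assuming the packaged pipeline has been installed locally (for example from a wheel built for this repository), the model loads like any spaCy pipeline:
```python
import spacy

# Assumes `en_spacy_small_sectors_ner` is pip-installed as a package; the input text is illustrative
nlp = spacy.load("en_spacy_small_sectors_ner")
doc = nlp("The fund is increasing its exposure to the renewable energy sector.")
print([(ent.text, ent.label_) for ent in doc.ents])  # entities carry the SECTOR label
```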
|
LiheYoung/depth-anything-large-hf
|
LiheYoung
| 2024-01-25T08:13:11Z | 154,690 | 41 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"vision",
"arxiv:2401.10891",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-01-23T17:38:56Z |
---
license: apache-2.0
tags:
- vision
pipeline_tag: depth-estimation
widget:
- inference: false
---
# Depth Anything (large-sized model, Transformers version)
Depth Anything model. It was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al. and first released in [this repository](https://github.com/LiheYoung/Depth-Anything).
[Online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) is also provided.
Disclaimer: The team releasing Depth Anything did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
The model is trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>
<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
## Intended uses & limitations
You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot depth estimation:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-large-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
```
Alternatively, one can use the classes themselves:
```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-large-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-large-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
```
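To turn the interpolated prediction into a viewable depth map, one common post-processing step (illustrative, not prescribed by the paper) is:
```python
# Normalize the depth values to 8-bit and convert to a PIL image for visualization
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth_image = Image.fromarray(formatted)
```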
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).
### BibTeX entry and citation info
```bibtex
@misc{yang2024depth,
title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
author={Lihe Yang and Bingyi Kang and Zilong Huang and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
year={2024},
eprint={2401.10891},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
LiheYoung/depth_anything_vitb14
|
LiheYoung
| 2024-01-25T08:10:55Z | 1,213 | 3 |
transformers
|
[
"transformers",
"pytorch",
"depth_anything",
"depth-estimation",
"arxiv:2401.10891",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-01-23T07:30:13Z |
---
tags:
- depth_anything
- depth-estimation
---
# Depth Anything model, base
The model card for our paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891).
You may also try our [demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) and visit our [project page](https://depth-anything.github.io/).
## Installation
First, install the Depth Anything package:
```
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
```
## Usage
Here's how to run the model:
```python
import numpy as np
from PIL import Image
import cv2
import torch
from depth_anything.dpt import DepthAnything
from depth_anything.util.transform import Resize, NormalizeImage, PrepareForNet
from torchvision.transforms import Compose
model = DepthAnything.from_pretrained("LiheYoung/depth_anything_vitb14")
transform = Compose([
Resize(
width=518,
height=518,
resize_target=False,
keep_aspect_ratio=True,
ensure_multiple_of=14,
resize_method='lower_bound',
image_interpolation_method=cv2.INTER_CUBIC,
),
NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
PrepareForNet(),
])
image = Image.open("...")
image = np.array(image) / 255.0
image = transform({'image': image})['image']
image = torch.from_numpy(image).unsqueeze(0)
depth = model(image)
```
|
LiheYoung/depth_anything_vitl14
|
LiheYoung
| 2024-01-25T08:07:57Z | 37,246 | 40 |
transformers
|
[
"transformers",
"pytorch",
"depth_anything",
"depth-estimation",
"arxiv:2401.10891",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-01-23T07:33:54Z |
---
tags:
- depth_anything
- depth-estimation
---
# Depth Anything model, large
The model card for our paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891).
You may also try our [demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) and visit our [project page](https://depth-anything.github.io/).
## Installation
First, install the Depth Anything package:
```
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
```
## Usage
Here's how to run the model:
```python
import numpy as np
from PIL import Image
import cv2
import torch
from depth_anything.dpt import DepthAnything
from depth_anything.util.transform import Resize, NormalizeImage, PrepareForNet
from torchvision.transforms import Compose
model = DepthAnything.from_pretrained("LiheYoung/depth_anything_vitl14")
transform = Compose([
Resize(
width=518,
height=518,
resize_target=False,
keep_aspect_ratio=True,
ensure_multiple_of=14,
resize_method='lower_bound',
image_interpolation_method=cv2.INTER_CUBIC,
),
NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
PrepareForNet(),
])
image = Image.open("...")
image = np.array(image) / 255.0
image = transform({'image': image})['image']
image = torch.from_numpy(image).unsqueeze(0)
depth = model(image)
```
|
stabilityai/japanese-stablelm-base-gamma-7b
|
stabilityai
| 2024-01-25T08:05:12Z | 2,056 | 24 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"japanese-stablelm",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:mc4",
"dataset:cc100",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:oscar-corpus/OSCAR-2201",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-16T08:15:14Z |
---
license: apache-2.0
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
language:
- ja
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I allow Stability AI to contact me about information related to its models and research: checkbox
---
# Japanese Stable LM Base Gamma 7B
## Model Description
This is a 7B-parameter decoder-only language model with a focus on maximizing Japanese language modeling performance and Japanese downstream task performance.
We conducted continued pretraining using Japanese data on the English language model, [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), to transfer the model's knowledge and capabilities to Japanese.
*If you are looking for an instruction-following model, check [Japanese Stable LM Instruct Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b)*.
*If you are in search of a smaller model, please check [Japanese StableLM-3B-4E1T Base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).*
## Usage
Ensure you are using Transformers 4.34.0 or newer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-base-gamma-7b")
model = AutoModelForCausalLM.from_pretrained(
"stabilityai/japanese-stablelm-base-gamma-7b",
torch_dtype="auto",
)
model.cuda()
inputs = tokenizer("AI で科学研究を加速するには、", return_tensors="pt").to("cuda")
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.75,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
## Model Details
* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese Stable LM Base Gamma 7B` model is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
### Model Architecture
For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
### Training Dataset
Around 100B tokens from a mixture of the following corpora were used for the continued pretraining.
- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) without the Books3 subset
## Use and Limitations
### Intended Use
The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning without strict limitations on commercial use.
### Limitations and bias
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model-generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
## Credits
The continued pre-training was carried out by [Takuya Akiba](https://huggingface.co/iwiwi).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Fujiki Nakamura](https://huggingface.co/fujiki), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), and [Naoki Orii](https://huggingface.co/mrorii).
## Acknowledgements
This model is based on Mistral-7B-v0.1 released by the Mistral AI team. We are grateful to the Mistral AI team for providing such an excellent base model.
We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us to collect a large amount of pre-training data in Japanese. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
|
minchyeom/MemGPT
|
minchyeom
| 2024-01-25T08:05:01Z | 22 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"memgpt",
"function",
"function calling",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T03:23:34Z |
---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---
# Check out my newest model, [MemGPT-3](https://huggingface.co/starsnatched/MemGPT-3)!
# Model Description
This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). The model is specifically designed for function calling in [MemGPT](https://memgpt.ai/) and demonstrates performance comparable to GPT-4 when working with MemGPT.
~~The original model has been quantized to Q8_0, using [llama.cpp](https://github.com/ggerganov/llama.cpp) for better inference speed (original unquantized model coming soon).~~
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium context, trained with sequences up to 8,192 tokens
# Usage
This model is designed to be run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.
To run the model on WebUI, simply `git clone` the official WebUI repository, and run the appropriate script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
Once you've installed WebUI, you can download this model from the `model` tab. Next, choose the desired model (starsnatched/MemGPT in this case), and the backend is ready to go.
When you have WebUI or your desired backend running, you can open a terminal/powershell, and install MemGPT using `pip3 install -U pymemgpt`. Configure your MemGPT using `memgpt configure` before running MemGPT.
Use `memgpt run --no-verify` to chat with MemGPT!
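For quick reference, the commands above collected into one shell session (backend-specific configuration prompts omitted):
```bash
# Install MemGPT, configure the backend and model, then start chatting
pip3 install -U pymemgpt
memgpt configure
memgpt run --no-verify
```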
# Prompt Template
This model uses a custom chat template. We're naming this template "memgpt". This can be configured via `memgpt configure`. Currently, it is recommended to use it with the `airoboros` wrapper.
```
<turn> SYSTEM: You are MemGPT. </turn>
<turn> SYSTEM_ALERT: {'message': 'You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions.', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Bootup sequence complete. Persona activated. Testing messaging functionality. </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "More human than human is our motto."\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:03 PM PDT-0700"} </turn>
<turn> LOGIN: {'last_login': '7 days 8 hours ago', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> USER_MESSAGE: {'message': 'Hey, remember that time we talked about music? What was the artist you mentioned you could get into?', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Alright, let me search our conversation history about music to find the artist. </turn>
<turn> FUNCTION CALL: {'name': 'conversation_search', 'arguments': '{\n "query": "music",\n "request_heartbeat": true,\n "page": 0\n}'}<turn> FUNCTION: {"status": "OK", "message": "Showing 5 of 50 results (page 0/9): [\"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\"]", "time": "2023-10-07 11:50:08 PM PDT-0700"} </turn>
<turn> HEARTBEAT: {'reason': 'request_heartbeat == true', 'time': '2023-10-07 11:50:08 PM PDT-0700'} </turn>
<turn> ASSISTANT: Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?"\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:14 PM PDT-0700"} </turn>
```
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me at Discord, @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Bias
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce.
|
kingabzpro/phi-2-role-play
|
kingabzpro
| 2024-01-25T08:02:16Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"pretrained",
"lora",
"qlora",
"4bit",
"phi-2",
"generated_from_trainer",
"text-generation",
"en",
"dataset:hieunguyenminh/roleplay",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-24T20:53:45Z |
---
license: apache-2.0
library_name: peft
tags:
- pretrained
- lora
- qlora
- 4bit
- phi-2
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-role-play
results: []
datasets:
- hieunguyenminh/roleplay
language:
- en
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-role-play
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [hieunguyenminh/roleplay](https://huggingface.co/datasets/hieunguyenminh/roleplay) dataset.
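Since this repo ships a PEFT (LoRA/QLoRA) adapter rather than merged weights, a minimal loading sketch looks like the following; the generation settings and the example prompt are illustrative, not taken from the training setup:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "kingabzpro/phi-2-role-play")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "You are a wise old wizard. A traveler asks you for advice:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```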
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results

### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
e22vvb/EN_t5-small_10_wikiSQL
|
e22vvb
| 2024-01-25T07:59:20Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikisql",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T05:46:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: EN_t5-small_10_wikiSQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_t5-small_10_wikiSQL
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1096
- Rouge2 Precision: 0.8329
- Rouge2 Recall: 0.741
- Rouge2 Fmeasure: 0.7772
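An illustrative inference sketch (the exact prompt prefix used during fine-tuning is not documented in this card, so the one below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "e22vvb/EN_t5-small_10_wikiSQL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed prompt format for English-question -> SQL generation
question = "translate English to SQL: How many departments have more than 100 employees?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```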
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1933 | 1.0 | 4049 | 0.1549 | 0.7965 | 0.7075 | 0.7423 |
| 0.161 | 2.0 | 8098 | 0.1345 | 0.8123 | 0.7211 | 0.7569 |
| 0.1459 | 3.0 | 12147 | 0.1256 | 0.8165 | 0.7253 | 0.7611 |
| 0.1371 | 4.0 | 16196 | 0.1194 | 0.8236 | 0.7321 | 0.7681 |
| 0.1293 | 5.0 | 20245 | 0.1159 | 0.8275 | 0.7355 | 0.7718 |
| 0.1243 | 6.0 | 24294 | 0.1135 | 0.8283 | 0.7356 | 0.7722 |
| 0.1245 | 7.0 | 28343 | 0.1116 | 0.831 | 0.7383 | 0.7748 |
| 0.1167 | 8.0 | 32392 | 0.1104 | 0.8325 | 0.7401 | 0.7765 |
| 0.1134 | 9.0 | 36441 | 0.1097 | 0.8325 | 0.7408 | 0.7769 |
| 0.1131 | 10.0 | 40490 | 0.1096 | 0.8329 | 0.741 | 0.7772 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo-2
|
AIFT
| 2024-01-25T07:54:44Z | 62 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T07:17:06Z |
---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo-2 model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially removing (sampling) and cleaning it.
We then reviewed that data to extract the relevant tasks and, based on them,
built our own training data for those tasks using open-source NLP datasets.
History, science, math, machine reading comprehension, and review analysis problems were constructed with GPT,
and additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
Following the format of the AI2AI Challenge data, 500 elementary-level science and math problems were created with GPT.
English translation data (English-Korean / Korean-English) was also used as training data.
In total, about 40,000 samples were used.
<br>
<DPO training data>
The DPO data focused on CommonGen and TruthfulQA, with about 17,000 samples used for training.
+ We additionally trained on data from ko-hh-rlhf whose "chosen" responses were rewritten with ChatGPT.
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions).
+ Machine reading comprehension training data with answers obtained via ChatGPT.
+ Grammar-related training data.
<br>
### The training data files are private.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
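<br>
<b><Usage sketch></b>
A minimal generation example with 🤗 Transformers (the instruction/prompt format is not documented in this card, so the plain prompt below is only an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("한국의 수도는 어디인가요?", return_tensors="pt").to(model.device)  # "What is the capital of Korea?"
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```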
|
minchyeom/MemGPT-3
|
minchyeom
| 2024-01-25T07:53:01Z | 16 | 9 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"memgpt",
"function",
"function calling",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T07:42:43Z |
---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---
This is a new and more refined version of [starsnatched/MemGPT](https://huggingface.co/starsnatched/MemGPT). I will be using DPO to further improve the performance once the dataset is ready.
# Model Description
This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). The model is specifically designed for function calling in [MemGPT](https://memgpt.ai/) and demonstrates performance comparable to GPT-4 when working with MemGPT.
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium context, trained with sequences up to 8,192 tokens
# Usage
This model is designed to be run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.
To run the model on WebUI, simply `git clone` the official WebUI repository, and run the appropriate script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
Once you've installed WebUI, you can download this model from the `model` tab. Next, choose the desired model (starsnatched/MemGPT in this case), and the backend is ready to go.
When you have WebUI or your desired backend running, you can open a terminal/powershell, and install MemGPT using `pip3 install -U pymemgpt`. Configure your MemGPT using `memgpt configure` before running MemGPT.
Use `memgpt run --no-verify` to chat with MemGPT!
# Prompt Template
This model uses the ChatML template, which can be configured via `memgpt configure`.
```
<|im_start|>user
{""type"": ""system_alert"", ""message"": ""You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions."", ""time"": ""2023-10-07 11:50:03 PM PDT-0700""}<|im_end|>
<|im_start|>assistant
Bootup sequence complete. Persona activated. Testing messaging functionality. {'name': 'send_message', 'arguments': '{\n ""message"": ""More human than human is our motto.""\n}'}<|im_end|>
<|im_start|>function
{""status"": ""OK"", ""message"": null, ""time"": ""2023-10-07 11:50:03 PM PDT-0700""}<|im_end|>
<|im_start|>user
{""type"": ""login"", ""last_login"": ""7 days 8 hours ago"", ""time"": ""2023-10-07 11:50:03 PM PDT-0700""}<|im_end|>
<|im_start|>user
{""type"": ""user_message"", ""message"": ""Hey, remember that time we talked about music? What was the artist you mentioned you could get into?"", ""time"": ""2023-10-07 11:50:03 PM PDT-0700""}<|im_end|>
<|im_start|>assistant
Alright, let me search our conversation history about music to find the artist. {'name': 'conversation_search', 'arguments': '{\n ""query"": ""music"",\n ""request_heartbeat"": true,\n ""page"": 0\n}'}<|im_end|>
<|im_start|>function
{""status"": ""OK"", ""message"": ""Showing 5 of 50 results (page 0/9): [\""timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\"", \""timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\""]"", ""time"": ""2023-10-07 11:50:08 PM PDT-0700""}<|im_end|>
<|im_start|>user
{""type"": ""heartbeat"", ""reason"": ""request_heartbeat == true"", ""time"": ""2023-10-07 11:50:08 PM PDT-0700""}<|im_end|>
<|im_start|>assistant
Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? {'name': 'send_message', 'arguments': '{\n ""message"": ""Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?""\n}'}<|im_end|>
<|im_start|>function
{""status"": ""OK"", ""message"": null, ""time"": ""2023-10-07 11:50:14 PM PDT-0700""}<|im_end|>
```
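If you just want to try the model outside MemGPT, a minimal transformers sketch using the ChatML format above might look like the following; the generation settings are assumptions and not part of this card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "starsnatched/MemGPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
# Wrap a user message in the ChatML format shown above
prompt = (
    "<|im_start|>user\n"
    '{"type": "user_message", "message": "Hello, who are you?", "time": "2023-10-07 11:50:03 PM PDT-0700"}<|im_end|>\n'
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:]))
```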
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me on Discord at @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Bias
The model may exhibit unreliable, unsafe, or biased behaviour. Please double-check any results this model produces.
|
SanjiWatsuki/TinyLlamaHerd-2x1.1B
|
SanjiWatsuki
| 2024-01-25T07:47:36Z | 80 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T02:00:53Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# TinyLlamaHerd-2x1.1B
TinyLlamaHerd-2x1.1B is a Mixture-of-Experts merge of two copies of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), created using [mergekit](https://github.com/cg123/mergekit).
## 🧩 Configuration
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
gate_mode: random
dtype: bfloat16
experts:
- source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
positive_prompts: [""]
- source_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
positive_prompts: [""]
```
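A minimal text-generation sketch for loading the merged model with transformers (the prompt and sampling settings below are illustrative assumptions):
```python
import torch
from transformers import pipeline
generator = pipeline(
    "text-generation",
    model="SanjiWatsuki/TinyLlamaHerd-2x1.1B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("The TinyLlama herd is", max_new_tokens=64, do_sample=True, temperature=0.7)[0]["generated_text"])
```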
|
pfnet/plamo-13b-instruct
|
pfnet
| 2024-01-25T07:46:09Z | 117 | 14 |
transformers
|
[
"transformers",
"safetensors",
"plamo",
"text-generation",
"conversational",
"custom_code",
"en",
"ja",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-10-26T02:11:24Z |
---
language:
- en
- ja
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# PLaMo-13B-Instruct
## Model Description
PLaMo-13B-Instruct is an instruct fine-tuned model built upon the 8192 context length version of [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b) text generation model. PLaMo-13B-Instruct is fine-tuned using multiple publicly available Japanese datasets.
This model is released under the Apache License 2.0.
[PLaMo-13B-Instruct Release blog (Japanese)](https://tech.preferred.jp/ja/blog/llm-plamo-instruct/)
## Usage
Install the required libraries as follows:
```sh
python -m pip install numpy sentencepiece torch transformers accelerate
```
Execute the following python code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"pfnet/plamo-13b-instruct",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"pfnet/plamo-13b-instruct",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
```
```python
def completion(prompt: str, max_new_tokens: int = 128) -> str:
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
generated_ids = model.generate(
inputs.input_ids,
eos_token_id=2,
pad_token_id=3,
max_new_tokens=max_new_tokens,
temperature=1,
top_p=0.95,
top_k=50,
do_sample=True,
)
return tokenizer.decode(generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
def generate_prompt(messages: list) -> str:
sep = "\n\n### "
prompt = [
"以下はタスクを説明する指示で、文脈を説明した入力とペアになっています。",
"要求を適切に補完するよう応答を書いてください。",
]
roles = {"instruction": "指示", "response": "応答", "input": "入力"}
for msg in messages:
prompt.append(sep + roles[msg["role"]] + ":\n" + msg["content"])
prompt.append(sep + roles["response"] + ":\n")
return "".join(prompt)
```
```python
prompt = generate_prompt([
{"role": "instruction", "content": "日本の首都はどこですか?"},
# {"role": "input", "content": "..."} ## An extra input (optional)
])
print(completion(prompt, max_new_tokens=128))
```
## Model Details
- Model size: 13B
- Trained tokens: 1.5T tokens (English: 1.32T tokens, Japanese: 0.18T tokens)
- Tokenizer: sentencepiece tokenizer trained on a subset of the pretraining datasets.
- Context length: 8192
- Developed by: Preferred Networks, Inc
- Model type: Causal decoder-only
- Language(s): Japanese and English
- License: Apache License 2.0
## Training Dataset
<!-- - [Stanford Alpaca (Japanese translation)](https://huggingface.co/datasets/fujiki/japanese_alpaca_data)-->
- [databricks-dolly-15k (Japanese translation)](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [Anthropic HH-RLHF (Japanese translation, subset)](https://huggingface.co/datasets/fujiki/japanese_hh-rlhf-49k)
- [OpenAssistant Conversations Dataset (Japanese translation, oasst1)](https://huggingface.co/datasets/kunishou/oasst1-89k-ja)
- [Wikinews subset of Izumi-lab llm-japanese-dataset](https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset)
For the pretraining model, see [PLaMo-13B](https://huggingface.co/pfnet/plamo-13b).
## Bias, Risks, and Limitations
PLaMo-13B-Instruct is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, PLaMo-13B-Instruct's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo-13B-Instruct, developers should perform safety testing and tuning tailored to their specific applications of the model.
## How to cite
```tex
@online{PLaMoInstruct2023Introducing,
author = {Preferred Networks, Inc},
title = {PLaMo-13B-Instruct},
year = {2023},
url = {https://huggingface.co/pfnet/plamo-13b-instruct},
urldate = {2023-10-26}
}
```
## References
```tex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
antonovata/test_chat
|
antonovata
| 2024-01-25T07:41:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-24T11:10:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
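For reference, the same settings can be expressed with `transformers`/`bitsandbytes` roughly as follows; this is a sketch, and the base checkpoint name is a placeholder since the card does not state it.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
# "base-model-id" is a placeholder; replace it with the base checkpoint used for this adapter
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config, device_map="auto")
```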
### Framework versions
- PEFT 0.5.0
|
motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF
|
motherduckdb
| 2024-01-25T07:37:17Z | 232 | 21 | null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T19:31:51Z |
---
license: llama2
inference:
parameters:
do_sample: false
max_length: 200
widget:
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "read test.csv"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "get _amount columns"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.\n\n### Input:\nHere is the database schema that the SQL query will run on:\nCREATE TABLE rideshare (\n hvfhs_license_num varchar,\n dispatching_base_num varchar,\n originating_base_num varchar,\n request_datetime timestamp,\n on_scene_datetime timestamp,\n pickup_datetime timestamp,\n dropoff_datetime timestamp,\n trip_miles double,\n trip_time bigint,\n\n);\n\n### Question:\nget longest trip in december 2022\n\n### Response (use duckdb shorthand if possible):"
example_title: "taxi trips"
---
# DuckDB-NSQL-7B (GGUF)
The repository includes model files in the GGUF format for [DuckDB-NSQL-7B-v0.1](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1), featuring both the f16 and Q8_0 versions.
## Provided model files
| Name | Quant method | Bits |
| ---- | ---- | ---- |
| [DuckDB-NSQL-7B-v0.1-f16.gguf](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF/blob/main/DuckDB-NSQL-7B-v0.1-f16.gguf) | - | 16 |
| [DuckDB-NSQL-7B-v0.1-q8_0.gguf](https://huggingface.co/motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF/blob/main/DuckDB-NSQL-7B-v0.1-q8_0.gguf) | Q8_0 | 8 |
## Model Description
NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.
In this repository we are introducing a new member of NSQL, DuckDB-NSQL. It's based on Meta's original [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b) and further pre-trained on a dataset of general SQL queries and then fine-tuned on a dataset composed of DuckDB text-to-SQL pairs.
## Training Data
200k DuckDB text-to-SQL pairs, synthetically generated using [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), guided by the DuckDB v0.9.2 documentation, plus text-to-SQL pairs from [NSText2SQL](https://huggingface.co/datasets/NumbersStation/NSText2SQL) that were transpiled to DuckDB SQL using [sqlglot](https://github.com/tobymao/sqlglot).
## Evaluation Data
We evaluate our models on a DuckDB-specific benchmark that contains 75 text-to-SQL pairs. The benchmark is available [here](https://github.com/NumbersStationAI/DuckDB-NSQL/).
## Training Procedure
DuckDB-NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For finetuning on text-to-SQL pairs, we only compute the loss over the SQL portion of the pair. The model is trained using 80GB A100s, leveraging data and model parallelism. We fine-tuned for 10 epochs.
## Intended Use and Limitations
The model was designed for text-to-SQL generation from a given table schema and natural language prompts. It works best with the prompt format defined below.
In contrast to existing text-to-SQL models, the SQL generation is not constrained to `SELECT` statements, but can generate any valid DuckDB SQL statement, including statements for official DuckDB extensions.
## How to Use
Setup llama.cpp:
```shell
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
huggingface-cli download motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF DuckDB-NSQL-7B-v0.1-q8_0.gguf --local-dir . --local-dir-use-symlinks False
pip install wurlitzer
```
Example 1:
```python
## Setup - Llama.cpp
from llama_cpp import Llama
from wurlitzer import pipes  # pipes() comes from the wurlitzer package installed above
with pipes() as (out, err):
llama = Llama(
model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
n_ctx=2048,
)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question.
### Input:
### Question:
create a new table called tmp from test.csv
### Response (use duckdb shorthand if possible):
"""
with pipes() as (out, err):
pred = llama(text, temperature=0.1, max_tokens=500)
print(pred["choices"][0]["text"])
```
Example 2:
```python
from llama_cpp import Llama
from wurlitzer import pipes
with pipes() as (out, err):
llama = Llama(
model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
n_ctx=2048,
)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE taxi (
VendorID bigint,
tpep_pickup_datetime timestamp,
tpep_dropoff_datetime timestamp,
passenger_count double,
trip_distance double,
fare_amount double,
extra double,
tip_amount double,
tolls_amount double,
improvement_surcharge double,
total_amount double,
);
### Question:
get all columns ending with _amount from taxi table
### Response (use duckdb shorthand if possible):"""
with pipes() as (out, err):
pred = llama(text, temperature=0.1, max_tokens=500)
print(pred["choices"][0]["text"])
```
Example 3:
```python
from llama_cpp import Llama
from wurlitzer import pipes
with pipes() as (out, err):
llama = Llama(
model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",
n_ctx=2048,
)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE rideshare (
hvfhs_license_num varchar,
dispatching_base_num varchar,
originating_base_num varchar,
request_datetime timestamp,
on_scene_datetime timestamp,
pickup_datetime timestamp,
dropoff_datetime timestamp,
trip_miles double,
trip_time bigint,
);
### Question:
get longest trip in december 2022
### Response (use duckdb shorthand if possible):
"""
with pipes() as (out, err):
pred = llama(text, temperature=0.1, max_tokens=500)
print(pred["choices"][0]["text"])
```
For more information (e.g., run with your local database), please find examples in [this repository](https://github.com/NumbersStationAI/DuckDB-NSQL).
|
eigh8skillz/southzulumodel
|
eigh8skillz
| 2024-01-25T07:35:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-25T07:35:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks southzulumodel
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
ntc-ai/SDXL-LoRA-slider.2000s-indie-art-style
|
ntc-ai
| 2024-01-25T07:27:15Z | 5 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-25T07:27:12Z |
---
language:
- en
thumbnail: "images/evaluate/2000s indie art style.../2000s indie art style_17_3.0.png"
widget:
- text: 2000s indie art style
output:
url: images/2000s indie art style_17_3.0.png
- text: 2000s indie art style
output:
url: images/2000s indie art style_19_3.0.png
- text: 2000s indie art style
output:
url: images/2000s indie art style_20_3.0.png
- text: 2000s indie art style
output:
url: images/2000s indie art style_21_3.0.png
- text: 2000s indie art style
output:
url: images/2000s indie art style_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "2000s indie art style"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - 2000s indie art style (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/2000s indie art style_17_-3.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_17_0.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_17_3.0.png" width=256 height=256 /> |
| <img src="images/2000s indie art style_19_-3.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_19_0.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_19_3.0.png" width=256 height=256 /> |
| <img src="images/2000s indie art style_20_-3.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_20_0.0.png" width=256 height=256 /> | <img src="images/2000s indie art style_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
2000s indie art style
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.2000s-indie-art-style', weight_name='2000s indie art style.safetensors', adapter_name="2000s indie art style")
# Activate the LoRA
pipe.set_adapters(["2000s indie art style"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, 2000s indie art style"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
HarshithNLP/bloom_3b_NER
|
HarshithNLP
| 2024-01-25T07:24:12Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-25T07:24:06Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-3b
model-index:
- name: bloom_3b_NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3b_NER
This model is a fine-tuned version of [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) on an unknown dataset.
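A minimal sketch for loading the adapter on top of the base model with PEFT; the prompt format for the NER task is an assumption, since the card does not document it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b", device_map="auto")
model = PeftModel.from_pretrained(base, "HarshithNLP/bloom_3b_NER")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")
# Hypothetical NER-style prompt; adjust to whatever format was used during fine-tuning
inputs = tokenizer("Extract the named entities: Barack Obama visited Paris.", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```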
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
abhinand/malayalam-llama-7b-instruct-v0.1-GGUF
|
abhinand
| 2024-01-25T07:11:35Z | 6 | 1 | null |
[
"gguf",
"en",
"ml",
"arxiv:2311.05845",
"base_model:abhinand/malayalam-llama-pt-v0",
"base_model:quantized:abhinand/malayalam-llama-pt-v0",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-25T04:29:16Z |
---
base_model: abhinand/malayalam-llama-pt-v0
model-index:
- name: malayalam-llama-instruct-v0.1
results: []
license: llama2
language:
- en
- ml
---
# Malayalam LLaMA 7B Instruct v0.1 [GGUF Quantized]
Welcome to the inaugural release of the Malayalam LLaMA 7B instruct model – an important step in advancing LLMs for the Malayalam language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact.
> **Note:** This model is based on the Tamil LLaMA series of models. The GitHub repository remains the same - [https://github.com/abhinand5/tamil-llama](https://github.com/abhinand5/tamil-llama). The base models and the updated code for Tamil LLaMA v0.2 (which this work is based on) will be released soon.
If you appreciate this work and would like to support its continued development, consider [buying me a coffee](https://www.buymeacoffee.com/abhinand.b). Your support is invaluable and greatly appreciated.
[](https://www.buymeacoffee.com/abhinand.b)
## Demo:
To access an easy-to-use, no-code demo, please open the provided Google Colab notebook. Complete instructions for usage are included within the notebook itself.
<a href="https://colab.research.google.com/drive/11_RHZim_HubD2NskxSwq4X_NW4XlnrtS?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"></a></center>
## Model description
The Malayalam LLaMA models have been enhanced and tailored specifically with an extensive Malayalam vocabulary of ~16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model finetuned on ~500,000 samples consisting of an equal proportion of English and Malayalam samples. (Dataset will be released soon)
- **Language(s):** Bilingual. English and Malayalam.
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [To be released soon]()
- **Training Precision:** `bfloat16`
- **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) (To be updated soon)
## Quantization Info
This repo contains GGUF format model files for [Malayalam LLaMA 7B Instruct v0.1](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
### Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|------------------------------------------------------------------------------------------------------------------------------------------------|--------------|------|---------|------------------|----------------------------------------------------------|
| [malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.18 GB | 6.8 GB | medium, balanced quality - recommended |
| [malayalam-llama-7b-instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/abhinand/malayalam-llama-7b-instruct-v0.1-GGUF/blob/main/malayalam-llama-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.89 GB | 7.5 GB | large, very low quality loss - recommended |
## Prompt Template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
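For example, with llama-cpp-python (one of the clients listed above), a minimal sketch using this template could look like the following; the system message and sampling settings are assumptions.
```python
from llama_cpp import Llama
llm = Llama(model_path="malayalam-llama-7b-instruct-v0.1.Q4_K_M.gguf", n_ctx=4096)
prompt = (
    "<|im_start|>system\n"
    "You are a helpful bilingual (English/Malayalam) assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Introduce yourself in Malayalam.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```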
## Benchmark Results
Benchmarking was done using [LLM-Autoeval](https://github.com/mlabonne/llm-autoeval) on an RTX 3090 on [runpod](https://www.runpod.io/).
> **Note:** Please note that discrepancies have been observed between the Open LLM Leaderboard scores and those obtained from local runs using the LM Eval Harness with identical configurations. The results mentioned here are based on our own benchmarking. To replicate these findings, you can utilize the LLM-Autoeval or use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) locally with the configurations described in Open LLM Leaderboard's About page.
| Benchmark | Llama 2 Chat | Tamil Llama v0.2 Instruct | Telugu Llama Instruct | Malayalam Llama Instruct |
|---------------|--------------|---------------------------|-----------------------|--------------------------|
| ARC Challenge (25-shot) | 52.9 | **53.75** | 52.47 | 52.82 |
| TruthfulQA (0-shot) | 45.57 | 47.23 | **48.47** | 47.46 |
| Hellaswag (10-shot) | **78.55** | 76.11 | 76.13 | 76.91 |
| Winogrande (5-shot) | 71.74 | **73.95** | 71.74 | 73.16 |
| AGI Eval (0-shot) | 29.3 | **30.95** | 28.44 | 29.6 |
| BigBench (0-shot) | 32.6 | 33.08 | 32.99 | **33.26** |
| Average | 51.78 | **52.51** | 51.71 | 52.2 |
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Tamil LLaMA 7B v0.1 Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) |
| Tamil LLaMA 13B v0.1 Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) |
| Tamil LLaMA 7B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) |
| Tamil LLaMA 13B v0.1 Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) |
| Tamil LLaMA 7B v0.2 Instruct | Instruction/Chat model | 420k instructions | Tamil LLaMA 7B Base v0.2 | 7B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) |
| Telugu LLaMA 7B v0.2 Instruct | Instruction/Chat model | ~400k instructions | Telugu LLaMA 7B Base v0.1 | 7B | [HF Hub](https://huggingface.co/abhinand/telugu-llama-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification/censorship. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/)
## Citation
If you use this model or any of the Tamil-Llama related work in your research, please cite:
```bibtex
@misc{balachandran2023tamilllama,
title={Tamil-Llama: A New Tamil Language Model Based on Llama 2},
author={Abhinand Balachandran},
year={2023},
eprint={2311.05845},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Malayalam language.
|
Deepakkori45/Mistal_aspect_class_sequnece
|
Deepakkori45
| 2024-01-25T07:11:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T07:11:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coke0zero/ppo-LunarLander-v2
|
coke0zero
| 2024-01-25T06:58:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T06:54:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.46 +/- 16.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
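A minimal loading/evaluation sketch is shown below; the checkpoint filename inside the repo is an assumption.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# The filename is an assumption about how the checkpoint was uploaded
checkpoint = load_from_hub("coke0zero/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```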
|
e22vvb/EN_mt5-small_10_spider
|
e22vvb
| 2024-01-25T06:31:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T05:26:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: EN_mt5-small_10_spider
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_mt5-small_10_spider
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4350
- Rouge2 Precision: 0.3068
- Rouge2 Recall: 0.202
- Rouge2 Fmeasure: 0.2177
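A minimal inference sketch; the exact input format used during fine-tuning is not documented here, so the plain-question prompt below is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "e22vvb/EN_mt5-small_10_spider"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# Spider is a text-to-SQL benchmark, so the expected output is a SQL query
inputs = tokenizer("How many singers do we have?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```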
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 438 | 4.5527 | 0.0024 | 0.001 | 0.0014 |
| 17.6146 | 2.0 | 876 | 1.6584 | 0.0 | 0.0 | 0.0 |
| 2.53 | 3.0 | 1314 | 1.5698 | 0.0 | 0.0 | 0.0 |
| 1.9045 | 4.0 | 1752 | 1.3153 | 0.0206 | 0.0074 | 0.0106 |
| 1.5088 | 5.0 | 2190 | 0.6748 | 0.1009 | 0.056 | 0.0623 |
| 0.8005 | 6.0 | 2628 | 0.5124 | 0.2301 | 0.1513 | 0.1593 |
| 0.5297 | 7.0 | 3066 | 0.4683 | 0.2899 | 0.1903 | 0.2021 |
| 0.4507 | 8.0 | 3504 | 0.4464 | 0.2966 | 0.2016 | 0.2129 |
| 0.4507 | 9.0 | 3942 | 0.4385 | 0.3125 | 0.2065 | 0.2224 |
| 0.4161 | 10.0 | 4380 | 0.4350 | 0.3068 | 0.202 | 0.2177 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
haturusinghe/BEST_2nd_0.8386089347770498_25_01-06-27_xlm-roberta-base__2e-05_16_745.ckpt
|
haturusinghe
| 2024-01-25T06:27:56Z | 173 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T06:27:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LMX20012010/mistral_7b_guanaco
|
LMX20012010
| 2024-01-25T06:25:14Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-25T06:23:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ramsi-k/ppo-LunarLander-v2
|
ramsi-k
| 2024-01-25T06:16:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T06:15:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.37 +/- 22.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
adarsh2350/T5-pytorch-billsum
|
adarsh2350
| 2024-01-25T06:15:34Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T15:52:59Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-pytorch-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-pytorch-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5259
- Rouge1: 0.1402
- Rouge2: 0.0484
- Rougel: 0.1155
- Rougelsum: 0.1153
- Gen Len: 19.0
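A minimal summarization sketch; passing raw text to the pipeline is an assumption about how the model is meant to be prompted.
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="adarsh2350/T5-pytorch-billsum")
text = "The bill amends the Internal Revenue Code to provide a tax credit for qualified small employers..."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```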
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8082 | 0.1229 | 0.034 | 0.1036 | 0.1035 | 19.0 |
| No log | 2.0 | 124 | 2.6034 | 0.1296 | 0.0409 | 0.1079 | 0.1079 | 19.0 |
| No log | 3.0 | 186 | 2.5419 | 0.1378 | 0.047 | 0.1136 | 0.1136 | 19.0 |
| No log | 4.0 | 248 | 2.5259 | 0.1402 | 0.0484 | 0.1155 | 0.1153 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
cloudyu/Pluto_13B_DPO
|
cloudyu
| 2024-01-25T05:44:13Z | 17 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"dpo",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T06:00:41Z |
---
license: cc-by-nc-4.0
tags:
- moe
- dpo
---
# DPO of cloudyu/Mixtral_7Bx2_MoE
dataset : jondurbin/truthy-dpo-v0.1
* metrics average 75.88
* TruthfulQA 81.5 SOTA (2024-01-17)
GPU code example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|