| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| google/flan-t5-large | google | 2023-07-17T12:49:05Z | 2,292,533 | 680 | transformers |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-10-21T10:07:08Z |
---
language:
- en
- fr
- ro
- de
- multilingual
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
tags:
- text2text-generation
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 large
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks and cover more languages.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Below are some example scripts showing how to use the model in `transformers`:
## Using the PyTorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1,836 tasks in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Large, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| naimul011/fine_tuned_llama-7b-100-hf | naimul011 | 2023-07-17T12:48:40Z | 0 | 0 | peft |
[
"peft",
"region:us"
] | null | 2023-07-16T10:47:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch reconstructing it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
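A minimal sketch of this configuration as a `transformers` `BitsAndBytesConfig` (the reconstruction is illustrative; only the values come from the list above):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
# A config like this is passed as `quantization_config=bnb_config` when loading
# the base model that the PEFT adapter is attached to.
```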
### Framework versions
- PEFT 0.4.0.dev0
| google/flan-t5-base | google | 2023-07-17T12:48:39Z | 804,134 | 836 | transformers |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"dataset:svakulenk0/qrecc",
"dataset:taskmaster2",
"dataset:djaym7/wiki_dialog",
"dataset:deepmind/code_contests",
"dataset:lambada",
"dataset:gsm8k",
"dataset:aqua_rat",
"dataset:esnli",
"dataset:quasc",
"dataset:qed",
"arxiv:2210.11416",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-10-21T10:02:31Z |
---
language:
- en
- fr
- ro
- de
- multilingual
tags:
- text2text-generation
widget:
- text: "Translate to German: My name is Arthur"
example_title: "Translation"
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- svakulenk0/qrecc
- taskmaster2
- djaym7/wiki_dialog
- deepmind/code_contests
- lambada
- gsm8k
- aqua_rat
- esnli
- quasc
- qed
license: apache-2.0
---
# Model Card for FLAN-T5 base
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/flan2_architecture.jpg"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks and cover more languages.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
# Usage
Below are some example scripts showing how to use the model in `transformers`:
## Using the PyTorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto")
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", torch_dtype=torch.float16)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base", device_map="auto", load_in_8bit=True)
input_text = "translate English to German: How old are you?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Uses
## Direct Use and Downstream Use
The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2210.11416.pdf):
> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
## Ethical considerations and risks
> Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
## Known Limitations
> Flan-T5 has not been tested in real world applications.
## Sensitive Use:
> Flan-T5 should not be applied for any unacceptable use cases, e.g., generation of abusive speech.
# Training Details
## Training Data
The model was trained on a mixture of tasks that includes those described in the table below (from the original paper, Figure 2):

## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):
> These models are based on pretrained T5 (Raffel et al., 2020) and fine-tuned with instructions for better zero-shot and few-shot performance. There is one fine-tuned Flan model per T5 model size.
The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks covering several languages (1,836 tasks in total). See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
## Results
For full results for FLAN-T5-Base, see the [research paper](https://arxiv.org/pdf/2210.11416.pdf), Table 3.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2210.11416,
doi = {10.48550/ARXIV.2210.11416},
url = {https://arxiv.org/abs/2210.11416},
author = {Chung, Hyung Won and Hou, Le and Longpre, Shayne and Zoph, Barret and Tay, Yi and Fedus, William and Li, Eric and Wang, Xuezhi and Dehghani, Mostafa and Brahma, Siddhartha and Webson, Albert and Gu, Shixiang Shane and Dai, Zhuyun and Suzgun, Mirac and Chen, Xinyun and Chowdhery, Aakanksha and Narang, Sharan and Mishra, Gaurav and Yu, Adams and Zhao, Vincent and Huang, Yanping and Dai, Andrew and Yu, Hongkun and Petrov, Slav and Chi, Ed H. and Dean, Jeff and Devlin, Jacob and Roberts, Adam and Zhou, Denny and Le, Quoc V. and Wei, Jason},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Scaling Instruction-Finetuned Language Models},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=9.16&mnli_lp=nan&20_newsgroup=3.34&ag_news=1.49&amazon_reviews_multi=0.21&anli=13.91&boolq=16.75&cb=23.12&cola=9.97&copa=34.50&dbpedia=6.90&esnli=5.37&financial_phrasebank=18.66&imdb=0.33&isear=1.37&mnli=11.74&mrpc=16.63&multirc=6.24&poem_sentiment=14.62&qnli=3.41&qqp=6.18&rotten_tomatoes=2.98&rte=24.26&sst2=0.67&sst_5bins=5.44&stsb=20.68&trec_coarse=3.95&trec_fine=10.73&tweet_ev_emoji=13.39&tweet_ev_emotion=4.62&tweet_ev_hate=3.46&tweet_ev_irony=9.04&tweet_ev_offensive=1.69&tweet_ev_sentiment=0.75&wic=14.22&wnli=9.44&wsc=5.53&yahoo_answers=4.14&model_name=google%2Fflan-t5-base&base_name=google%2Ft5-v1_1-base) using google/flan-t5-base as a base model yields average score of 77.98 in comparison to 68.82 by google/t5-v1_1-base.
The model is ranked 1st among all tested models for the google/t5-v1_1-base architecture as of 06/02/2023
Results:
| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|-------:|--------:|----------------:|
| 86.2188 | 89.6667 | 67.12 | 51.9688 | 82.3242 | 78.5714 | 80.1534 | 75 | 77.6667 | 90.9507 | 85.4 | 93.324 | 72.425 | 87.2457 | 89.4608 | 62.3762 | 82.6923 | 92.7878 | 89.7724 | 89.0244 | 84.8375 | 94.3807 | 57.2851 | 89.4759 | 97.2 | 92.8 | 46.848 | 80.2252 | 54.9832 | 76.6582 | 84.3023 | 70.6366 | 70.0627 | 56.338 | 53.8462 | 73.4 |
For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
| SojiLee/modelka-icons-style | SojiLee | 2023-07-17T12:30:20Z | 25 | 2 | diffusers |
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-17T12:28:41Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: outlidfkaskdn
---
### Modelka_icons_style Dreambooth model trained by SojiLee with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training), using the v2-1-512 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompt!
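For local inference, a minimal `diffusers` sketch (the prompt wording, dtype and generation settings are assumptions; the repository id and concept token come from this card):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tuned weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "SojiLee/modelka-icons-style", torch_dtype=torch.float16
).to("cuda")

# Include the concept token in the prompt.
prompt = "a flat icon of a coffee cup, outlidfkaskdn style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("modelka-icon.png")
```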
Sample pictures of:
outlidfkaskdn (use that token in your prompt)

| ShekDass/donut-base-sroie-cord | ShekDass | 2023-07-17T12:16:05Z | 46 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-07-17T12:11:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-cord
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| peterdamn/distil-ast-audioset-finetuned-gtzan | peterdamn | 2023-07-17T12:05:44Z | 173 | 0 | transformers |
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-17T08:29:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan
This model is a fine-tuned version of [bookbot/distil-ast-audioset](https://huggingface.co/bookbot/distil-ast-audioset) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5033
- Accuracy: 0.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
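As a rough illustration, these values map onto `transformers` `TrainingArguments` approximately as follows (a sketch only; the output directory is assumed, and the Adam betas/epsilon listed above are the `Trainer` defaults):

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distil-ast-audioset-finetuned-gtzan",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # gives the total train batch size of 8
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```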
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7719 | 1.0 | 112 | 1.0881 | 0.65 |
| 0.3801 | 2.0 | 225 | 0.8942 | 0.7 |
| 0.3706 | 3.0 | 337 | 0.9499 | 0.75 |
| 0.3541 | 4.0 | 450 | 0.5243 | 0.87 |
| 0.0132 | 5.0 | 562 | 0.5716 | 0.81 |
| 0.0221 | 6.0 | 675 | 0.5164 | 0.87 |
| 0.0001 | 7.0 | 787 | 0.4789 | 0.91 |
| 0.0002 | 8.0 | 900 | 0.5062 | 0.87 |
| 0.0528 | 9.0 | 1012 | 0.5029 | 0.89 |
| 0.0002 | 9.96 | 1120 | 0.5033 | 0.89 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
| yacine-djm/fg-bert-sustainability-15-1.5e-05-0.02-64 | yacine-djm | 2023-07-17T12:05:07Z | 3 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-17T11:16:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-15-1.5e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-15-1.5e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- F1: 0.9215
- Roc Auc: 0.9565
- Accuracy: 0.8846
On the validation dataset:
- The accuracy with Hamming loss is 0.7800788954635107
- The accuracy as a metric is 0.8326530612244898
- The global precision score is 0.8695652173913043
- The global recall score is 0.8536585365853658
- The global F1-score is 0.8615384615384616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3273 | 0.0 | 0.5 | 0.0956 |
| No log | 2.0 | 110 | 0.2344 | 0.3710 | 0.6182 | 0.2328 |
| No log | 3.0 | 165 | 0.1464 | 0.8973 | 0.9300 | 0.8441 |
| No log | 4.0 | 220 | 0.1143 | 0.9066 | 0.9405 | 0.8617 |
| No log | 5.0 | 275 | 0.0998 | 0.9091 | 0.9455 | 0.8659 |
| No log | 6.0 | 330 | 0.0901 | 0.9142 | 0.9490 | 0.8732 |
| No log | 7.0 | 385 | 0.0854 | 0.9121 | 0.9534 | 0.8721 |
| No log | 8.0 | 440 | 0.0778 | 0.9185 | 0.9538 | 0.8825 |
| No log | 9.0 | 495 | 0.0775 | 0.9119 | 0.9473 | 0.8763 |
| 0.1683 | 10.0 | 550 | 0.0742 | 0.9200 | 0.9535 | 0.8815 |
| 0.1683 | 11.0 | 605 | 0.0730 | 0.9196 | 0.9544 | 0.8805 |
| 0.1683 | 12.0 | 660 | 0.0716 | 0.9213 | 0.9556 | 0.8825 |
| 0.1683 | 13.0 | 715 | 0.0722 | 0.9218 | 0.9585 | 0.8836 |
| 0.1683 | 14.0 | 770 | 0.0712 | 0.9222 | 0.9580 | 0.8836 |
| 0.1683 | 15.0 | 825 | 0.0711 | 0.9215 | 0.9565 | 0.8846 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| gouse-73/ppo-LunarLander-v2 | gouse-73 | 2023-07-17T12:04:08Z | 0 | 0 | stable-baselines3 |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-17T12:03:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.49 +/- 14.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption (the usual SB3 Hub convention); check the repo's file list.
checkpoint = load_from_hub(repo_id="gouse-73/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| samarthum/model | samarthum | 2023-07-17T11:40:49Z | 1 | 0 | diffusers |
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-17T10:57:31Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - samarthum/model
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained with [DreamBooth](https://dreambooth.github.io/) on the instance prompt "a photo of sks dog". You can find some example images below.




LoRA for the text encoder was enabled: False.
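A minimal loading sketch with the `diffusers` LoRA API (the dtype, prompt and step count are assumptions; the base model, repository id and instance prompt come from this card):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named above, then attach the LoRA adaptation weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("samarthum/model")

# Reuse the instance prompt the weights were trained on.
image = pipe("a photo of sks dog in a garden", num_inference_steps=30).images[0]
image.save("sks-dog.png")
```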
| nadle/xlm-roberta-base-finetuned-panx-de | nadle | 2023-07-17T11:40:06Z | 105 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-17T11:27:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.7478932584269663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- F1: 0.7479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4393 | 1.0 | 125 | 0.2258 | 0.7479 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| Wyzard1004/TaxiV3 | Wyzard1004 | 2023-07-17T11:35:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-17T11:35:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: TaxiV3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the course notebook may use `import gym` instead

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads the pickled model dictionary (Q-table, env_id, ...) from the Hub.
model = load_from_hub(repo_id="Wyzard1004/TaxiV3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| abhinavkashyap92/distilhubert-finetuned-gtzan | abhinavkashyap92 | 2023-07-17T11:19:37Z | 172 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-07T09:09:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6995
- Accuracy: 0.87
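A minimal inference sketch with the `transformers` `pipeline` API (an assumption based on the repository's `audio-classification` tag; the audio path is a placeholder):

```python
from transformers import pipeline

# Music-genre classifier fine-tuned from distilhubert on GTZAN.
classifier = pipeline("audio-classification", model="abhinavkashyap92/distilhubert-finetuned-gtzan")
print(classifier("path/to/a_music_clip.wav"))  # hypothetical local audio file
```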
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7415 | 1.0 | 113 | 1.8323 | 0.43 |
| 1.2237 | 2.0 | 226 | 1.2223 | 0.65 |
| 0.8856 | 3.0 | 339 | 0.8612 | 0.71 |
| 0.658 | 4.0 | 452 | 0.6679 | 0.8 |
| 0.2701 | 5.0 | 565 | 0.5787 | 0.81 |
| 0.1232 | 6.0 | 678 | 0.7164 | 0.81 |
| 0.0726 | 7.0 | 791 | 0.6973 | 0.84 |
| 0.0253 | 8.0 | 904 | 0.6665 | 0.86 |
| 0.0939 | 9.0 | 1017 | 0.6756 | 0.87 |
| 0.0112 | 10.0 | 1130 | 0.6995 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
| NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed | NasimB | 2023-07-17T11:13:04Z | 5 | 0 | transformers |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-17T09:15:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-end-p8k-guten-rarity-all-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-end-p8k-guten-rarity-all-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3161
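A minimal inference sketch with the `transformers` `pipeline` API (an assumption based on the repository's `text-generation` tag; the prompt is a placeholder):

```python
from transformers import pipeline

# GPT-2 model fine-tuned on the generator dataset described in this card.
generator = pipeline("text-generation", model="NasimB/cbt-rarity-all-end-p8k-guten-rarity-all-mixed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```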
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6958 | 0.29 | 500 | 5.6331 |
| 5.3364 | 0.58 | 1000 | 5.2041 |
| 4.9968 | 0.88 | 1500 | 4.9505 |
| 4.7186 | 1.17 | 2000 | 4.8044 |
| 4.5561 | 1.46 | 2500 | 4.6841 |
| 4.4622 | 1.75 | 3000 | 4.5747 |
| 4.3263 | 2.04 | 3500 | 4.4949 |
| 4.1311 | 2.33 | 4000 | 4.4481 |
| 4.101 | 2.63 | 4500 | 4.3896 |
| 4.0645 | 2.92 | 5000 | 4.3353 |
| 3.871 | 3.21 | 5500 | 4.3306 |
| 3.8006 | 3.5 | 6000 | 4.3048 |
| 3.7879 | 3.79 | 6500 | 4.2723 |
| 3.6977 | 4.08 | 7000 | 4.2640 |
| 3.5167 | 4.38 | 7500 | 4.2617 |
| 3.5203 | 4.67 | 8000 | 4.2466 |
| 3.5051 | 4.96 | 8500 | 4.2353 |
| 3.3506 | 5.25 | 9000 | 4.2461 |
| 3.3237 | 5.54 | 9500 | 4.2458 |
| 3.3231 | 5.83 | 10000 | 4.2450 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
| navyatiwari11/my-pet-cat-nxt | navyatiwari11 | 2023-07-17T11:10:54Z | 5 | 0 | diffusers |
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-17T11:04:50Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-nxt Dreambooth model trained by navyatiwari11 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU100
Sample pictures of this concept:

| u2003158/saved_model | u2003158 | 2023-07-17T11:10:43Z | 15 | 0 | keras |
[
"keras",
"tf-keras",
"resnet",
"code",
"image-classification",
"arxiv:1910.09700",
"region:us"
] | image-classification | 2023-07-17T09:48:04Z |
---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** .pb
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** BugSenseAI
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| chayanbhansali/clock-tower | chayanbhansali | 2023-07-17T11:07:56Z | 10 | 0 | diffusers |
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-17T11:03:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### clock_tower Dreambooth model trained by chayanbhansali with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| arick6/ppo-LunarLander-v2 | arick6 | 2023-07-17T11:03:13Z | 0 | 0 | stable-baselines3 |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-16T11:29:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.27 +/- 11.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption (the usual SB3 Hub convention); check the repo's file list.
checkpoint = load_from_hub(repo_id="arick6/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| yacine-djm/fg-bert-sustainability-15-1e-05-0.02-64 | yacine-djm | 2023-07-17T11:02:52Z | 5 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-17T10:12:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: fg-bert-sustainability-15-1e-05-0.02-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fg-bert-sustainability-15-1e-05-0.02-64
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0893
- F1: 0.9139
- Roc Auc: 0.9527
- Accuracy: 0.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 55 | 0.3449 | 0.0 | 0.4999 | 0.0946 |
| No log | 2.0 | 110 | 0.3249 | 0.0 | 0.4999 | 0.0946 |
| No log | 3.0 | 165 | 0.2658 | 0.0755 | 0.5195 | 0.1320 |
| No log | 4.0 | 220 | 0.2092 | 0.4475 | 0.6489 | 0.3077 |
| No log | 5.0 | 275 | 0.1706 | 0.7755 | 0.8312 | 0.6663 |
| No log | 6.0 | 330 | 0.1461 | 0.8566 | 0.8998 | 0.7848 |
| No log | 7.0 | 385 | 0.1290 | 0.8929 | 0.9416 | 0.8430 |
| No log | 8.0 | 440 | 0.1161 | 0.9044 | 0.9463 | 0.8649 |
| No log | 9.0 | 495 | 0.1038 | 0.9111 | 0.9505 | 0.8680 |
| 0.2414 | 10.0 | 550 | 0.0993 | 0.9143 | 0.9523 | 0.8711 |
| 0.2414 | 11.0 | 605 | 0.0957 | 0.9106 | 0.9504 | 0.8669 |
| 0.2414 | 12.0 | 660 | 0.0932 | 0.9123 | 0.9516 | 0.8680 |
| 0.2414 | 13.0 | 715 | 0.0910 | 0.9185 | 0.9561 | 0.8784 |
| 0.2414 | 14.0 | 770 | 0.0901 | 0.9151 | 0.9538 | 0.8742 |
| 0.2414 | 15.0 | 825 | 0.0893 | 0.9139 | 0.9527 | 0.8711 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
| naltatis/distilbert-base-uncased-finetuned-squad | naltatis | 2023-07-17T10:59:14Z | 61 | 0 | transformers |
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-17T09:13:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: naltatis/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# naltatis/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0002
- Train End Logits Accuracy: 0.7231
- Train Start Logits Accuracy: 0.6883
- Validation Loss: 1.1339
- Validation End Logits Accuracy: 0.6926
- Validation Start Logits Accuracy: 0.6580
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a Keras sketch of the optimizer follows the list):
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
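A sketch reconstructing that optimizer in Keras (only the values above come from the card):

```python
import tensorflow as tf

# Linear decay of the learning rate from 2e-05 to 0 over 11,064 steps (power=1.0).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```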
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5428 | 0.5983 | 0.5604 | 1.1748 | 0.6817 | 0.6417 | 0 |
| 1.0002 | 0.7231 | 0.6883 | 1.1339 | 0.6926 | 0.6580 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
| ajaycompete143/ppo-Huggy | ajaycompete143 | 2023-07-17T10:48:41Z | 25 | 0 | ml-agents |
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-17T10:48:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ajaycompete143/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| roa7n/gpt2-human_nontata_promoters-rng | roa7n | 2023-07-17T10:39:18Z | 0 | 0 | peft |
[
"peft",
"region:us"
] | null | 2023-07-17T10:39:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
| FrancescoBonzi/whisper-small-finetuned-gtzan | FrancescoBonzi | 2023-07-17T10:38:04Z | 109 | 0 | transformers |
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-17T09:47:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: whisper-small-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-finetuned-gtzan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4130
- Accuracy: 0.92
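A minimal inference sketch using the `transformers` audio-classification pipeline; the audio file path is a placeholder:
```python
from transformers import pipeline

# load the fine-tuned checkpoint as an audio-classification pipeline
classifier = pipeline("audio-classification", model="FrancescoBonzi/whisper-small-finetuned-gtzan")

# "song.wav" is a placeholder path to a local audio file
predictions = classifier("song.wav")
print(predictions)  # list of {"label": <genre>, "score": <probability>} dicts
```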
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3174 | 1.0 | 45 | 1.1768 | 0.61 |
| 0.687 | 2.0 | 90 | 0.7042 | 0.8 |
| 0.4524 | 3.0 | 135 | 0.4748 | 0.85 |
| 0.197 | 4.0 | 180 | 0.4230 | 0.89 |
| 0.2199 | 5.0 | 225 | 0.4980 | 0.88 |
| 0.113 | 6.0 | 270 | 0.3381 | 0.91 |
| 0.0054 | 7.0 | 315 | 0.3697 | 0.92 |
| 0.004 | 8.0 | 360 | 0.2930 | 0.94 |
| 0.0632 | 9.0 | 405 | 0.4574 | 0.92 |
| 0.0029 | 10.0 | 450 | 0.4130 | 0.92 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
avichr/hebEMO_anticipation
|
avichr
| 2023-07-17T10:12:57Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_anger
|
avichr
| 2023-07-17T10:12:24Z | 255 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_surprise
|
avichr
| 2023-07-17T10:12:14Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
avichr/hebEMO_trust
|
avichr
| 2023-07-17T10:11:17Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# HebEMO - Emotion Recognition Model for Modern Hebrew
<img align="right" src="https://github.com/avichaychriqui/HeBERT/blob/main/data/heBERT_logo.png?raw=true" width="250">
HebEMO is a tool that detects polarity and extracts emotions from modern Hebrew User-Generated Content (UGC). It was trained on a unique Covid-19-related dataset that we collected and annotated.
HebEMO yielded a high weighted average F1-score of 0.96 for polarity classification.
Emotion detection reached an F1-score of 0.78-0.97, with the exception of *surprise*, which the model failed to capture (F1 = 0.41). These results are better than the best previously reported performance, even when compared to English.
## Emotion UGC Data Description
Our UGC data includes comments posted on news articles collected from 3 major Israeli news sites between January 2020 and August 2020. The total size of the data is ~150 MB, including over 7 million words and 350K sentences.
~2000 sentences were annotated by crowd members (3-10 annotators per sentence) for overall sentiment (polarity) and [eight emotions](https://en.wikipedia.org/wiki/Robert_Plutchik#Plutchik's_wheel_of_emotions): anger, disgust, anticipation, fear, joy, sadness, surprise and trust.
The percentage of sentences in which each emotion appeared is found in the table below.
| | anger | disgust | expectation | fear | happy | sadness | surprise | trust | sentiment |
|------:|------:|--------:|------------:|-----:|------:|--------:|---------:|------:|-----------|
| **ratio** | 0.78 | 0.83 | 0.58 | 0.45 | 0.12 | 0.59 | 0.17 | 0.11 | 0.25 |
## Performance
### Emotion Recognition
| emotion | f1-score | precision | recall |
|-------------|----------|-----------|----------|
| anger | 0.96 | 0.99 | 0.93 |
| disgust | 0.97 | 0.98 | 0.96 |
|anticipation | 0.82 | 0.80 | 0.87 |
| fear | 0.79 | 0.88 | 0.72 |
| joy | 0.90 | 0.97 | 0.84 |
| sadness | 0.90 | 0.86 | 0.94 |
| surprise | 0.40 | 0.44 | 0.37 |
| trust | 0.83 | 0.86 | 0.80 |
*The above metrics are for the positive class (meaning the emotion is reflected in the text).*
### Sentiment (Polarity) Analysis
| | precision | recall | f1-score |
|--------------|-----------|--------|----------|
| neutral | 0.83 | 0.56 | 0.67 |
| positive | 0.96 | 0.92 | 0.94 |
| negative | 0.97 | 0.99 | 0.98 |
| accuracy | | | 0.97 |
| macro avg | 0.92 | 0.82 | 0.86 |
| weighted avg | 0.96 | 0.97 | 0.96 |
*The sentiment (polarity) analysis model is also available on AWS! For more information, visit [AWS' git](https://github.com/aws-samples/aws-lambda-docker-serverless-inference/tree/main/hebert-sentiment-analysis-inference-docker-lambda).*
## How to use
### Emotion Recognition Model
An online model can be found at [huggingface spaces](https://huggingface.co/spaces/avichr/HebEMO_demo) or as [colab notebook](https://colab.research.google.com/drive/1Jw3gOWjwVMcZslu-ttXoNeD17lms1-ff?usp=sharing)
```
# !pip install pyplutchik==0.0.7
# !pip install transformers==4.14.1
!git clone https://github.com/avichaychriqui/HeBERT.git
from HeBERT.src.HebEMO import *
HebEMO_model = HebEMO()
HebEMO_model.hebemo(input_path = 'data/text_example.txt')
# return analyzed pandas.DataFrame
hebEMO_df = HebEMO_model.hebemo(text='החיים יפים ומאושרים', plot=True)
```
<img src="https://github.com/avichaychriqui/HeBERT/blob/main/data/hebEMO1.png?raw=true" width="300" height="300" />
### For sentiment classification model (polarity ONLY):
```python
from transformers import AutoTokenizer, AutoModel, pipeline

tokenizer = AutoTokenizer.from_pretrained("avichr/heBERT_sentiment_analysis")  # same as 'avichr/heBERT' tokenizer
model = AutoModel.from_pretrained("avichr/heBERT_sentiment_analysis")

# how to use?
sentiment_analysis = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
    return_all_scores=True
)

sentiment_analysis('אני מתלבט מה לאכול לארוחת צהריים')
>>> [[{'label': 'neutral', 'score': 0.9978172183036804},
>>>  {'label': 'positive', 'score': 0.0014792329166084528},
>>>  {'label': 'negative', 'score': 0.0007035882445052266}]]

sentiment_analysis('קפה זה טעים')
>>> [[{'label': 'neutral', 'score': 0.00047328314394690096},
>>>  {'label': 'possitive', 'score': 0.9994067549705505},
>>>  {'label': 'negetive', 'score': 0.00011996887042187154}]]

sentiment_analysis('אני לא אוהב את העולם')
>>> [[{'label': 'neutral', 'score': 9.214012970915064e-05},
>>>  {'label': 'possitive', 'score': 8.876807987689972e-05},
>>>  {'label': 'negetive', 'score': 0.9998190999031067}]]
```
## Contact us
[Avichay Chriqui](mailto:avichayc@mail.tau.ac.il) <br>
[Inbal yahav](mailto:inbalyahav@tauex.tau.ac.il) <br>
The Coller Semitic Languages AI Lab <br>
Thank you, תודה, شكرا <br>
## If you used this model, please cite us as:
Chriqui, A., & Yahav, I. (2022). HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition. INFORMS Journal on Data Science, forthcoming.
```
@article{chriqui2021hebert,
title={HeBERT \& HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition},
author={Chriqui, Avihay and Yahav, Inbal},
journal={INFORMS Journal on Data Science},
year={2022}
}
```
|
roa7n/gpt2-human_nontata_promoters
|
roa7n
| 2023-07-17T10:01:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T10:01:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
msrtoto/Coral_TB_2
|
msrtoto
| 2023-07-17T09:50:12Z | 237 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T09:50:06Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Coral_TB_2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777791023254
---
# Coral_TB_2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
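A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

# load the fine-tuned ViT checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="msrtoto/Coral_TB_2")

# "animal.jpg" is a placeholder path to a local image
print(classifier("animal.jpg"))
```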
## Example Images
#### bear

#### beaver

#### bird

#### cat

#### dog

#### human

#### lynx

#### wolf

|
bagassword21/mywa
|
bagassword21
| 2023-07-17T09:49:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T09:48:53Z |
---
license: creativeml-openrail-m
---
|
bl4dylion/faster-whisper-small-belarusian
|
bl4dylion
| 2023-07-17T09:41:14Z | 18 | 2 |
transformers
|
[
"transformers",
"audio",
"automatic-speech-recognition",
"be",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-14T09:43:57Z |
---
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
language:
- be
pipeline_tag: automatic-speech-recognition
---
# Whisper small model for CTranslate2
This repository contains the conversion of [ales/whisper-small-belarusian](https://huggingface.co/ales/whisper-small-belarusian) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Install faster-whisper
```bash
pip install git+https://github.com/guillaumekln/faster-whisper.git
```
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("bl4dylion/faster-whisper-small-belarusian")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model ales/whisper-small-belarusian --output_dir faster-whisper-small-belarusian \
--copy_files tokenizer_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/ales/whisper-small-belarusian).**
|
TheUpperCaseGuy/Guy-Urdu-TTS
|
TheUpperCaseGuy
| 2023-07-17T09:34:18Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-17T09:23:10Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Guy-Urdu-TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Guy-Urdu-TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
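A minimal inference sketch, assuming the standard SpeechT5 text-to-speech API with the `microsoft/speecht5_hifigan` vocoder and an x-vector speaker embedding from `Matthijs/cmu-arctic-xvectors` (both are assumptions, not stated in this card):
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
model = SpeechT5ForTextToSpeech.from_pretrained("TheUpperCaseGuy/Guy-Urdu-TTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# any 512-dim x-vector can serve as the speaker embedding; index 7306 is arbitrary
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

# the input text is illustrative (romanized Urdu)
inputs = processor(text="aap kaise hain", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```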
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aditya78b/my-awesome-model-new
|
Aditya78b
| 2023-07-17T09:28:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T09:27:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
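A sketch of the equivalent `BitsAndBytesConfig`, assuming a causal-LM base model; the base model name is a placeholder, since the card does not state it:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# reproduce the 4-bit quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-name" is a placeholder: the card does not say which base model the adapter was trained on
base_model = AutoModelForCausalLM.from_pretrained("base-model-name", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base_model, "Aditya78b/my-awesome-model-new")
```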
### Framework versions
- PEFT 0.4.0.dev0
|
peterdamn/distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
|
peterdamn
| 2023-07-17T09:25:45Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:43:01Z |
---
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-ast-audioset-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [peterdamn/distil-ast-audioset-finetuned-gtzan](https://huggingface.co/peterdamn/distil-ast-audioset-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8269
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2642 | 1.0 | 225 | 1.0594 | 0.8 |
| 0.1655 | 2.0 | 450 | 0.9670 | 0.84 |
| 0.0009 | 3.0 | 675 | 0.9774 | 0.79 |
| 0.0093 | 4.0 | 900 | 0.9330 | 0.83 |
| 0.0 | 5.0 | 1125 | 0.8269 | 0.84 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
akdeniz27/taxi-v3
|
akdeniz27
| 2023-07-17T09:22:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T09:22:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL course notebook (custom implementation)
model = load_from_hub(repo_id="akdeniz27/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SotirisLegkas/Socratic-GODEL-2
|
SotirisLegkas
| 2023-07-17T09:21:47Z | 96 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T17:16:26Z |
Instruction: given a context, respond using Socratic dialogue principles by asking questions, considering various viewpoints, and promoting critical thinking.
|
Sindy11/squad-bloom-3b
|
Sindy11
| 2023-07-17T09:09:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T09:09:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ykirpichev/speecht5_finetuned_voxpopuli_fr
|
ykirpichev
| 2023-07-17T09:02:15Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"generated_from_trainer",
"dataset:facebook/voxpopuli-fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-17T07:04:40Z |
---
license: mit
tags:
- text-to-speech
- generated_from_trainer
datasets:
- facebook/voxpopuli-fr
model-index:
- name: speecht5_finetuned_voxpopuli_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_fr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli-fr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5294 | 2.99 | 1000 | 0.4842 |
| 0.5094 | 5.98 | 2000 | 0.4688 |
| 0.5032 | 8.97 | 3000 | 0.4636 |
| 0.4981 | 11.96 | 4000 | 0.4623 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MatthisHoules/t5-large-finetuned-break-qdmr-decomposition
|
MatthisHoules
| 2023-07-17T08:56:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:break_data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-02T17:43:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- break_data
metrics:
- bleu
model-index:
- name: t5-large-finetuned-break-qdmr-decomposition
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: break_data
type: break_data
config: QDMR
split: validation
args: QDMR
metrics:
- name: Bleu
type: bleu
value: 0.22169382457557757
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetuned-break-qdmr-decomposition
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the break_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1729
- Bleu: 0.2217
- Brevity Penalty: 0.2926
- Length Ratio: 0.4487
- Translation Length: 108954
- Reference Length: 242845
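A minimal sketch of generating a QDMR decomposition; the input question and generation settings are illustrative:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "MatthisHoules/t5-large-finetuned-break-qdmr-decomposition"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# decompose a complex question into QDMR steps
question = "What is the capital of the country with the largest population?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```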
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------:|:------------:|:------------------:|:----------------:|
| No log | 1.0 | 346 | 0.2217 | 0.2190 | 0.2973 | 0.4519 | 109738 | 242845 |
| 0.3597 | 2.0 | 692 | 0.1898 | 0.2213 | 0.2944 | 0.4499 | 109245 | 242845 |
| 0.1943 | 3.0 | 1038 | 0.1780 | 0.2213 | 0.2936 | 0.4494 | 109125 | 242845 |
| 0.1943 | 4.0 | 1385 | 0.1722 | 0.2209 | 0.2926 | 0.4486 | 108943 | 242845 |
| 0.1588 | 5.0 | 1731 | 0.1708 | 0.2221 | 0.2938 | 0.4495 | 109159 | 242845 |
| 0.1395 | 6.0 | 2077 | 0.1699 | 0.2209 | 0.2907 | 0.4473 | 108635 | 242845 |
| 0.1395 | 7.0 | 2423 | 0.1699 | 0.2219 | 0.2927 | 0.4487 | 108964 | 242845 |
| 0.1245 | 8.0 | 2770 | 0.1717 | 0.2215 | 0.2924 | 0.4485 | 108909 | 242845 |
| 0.1152 | 9.0 | 3116 | 0.1724 | 0.2215 | 0.2924 | 0.4485 | 108914 | 242845 |
| 0.1152 | 9.99 | 3460 | 0.1729 | 0.2217 | 0.2926 | 0.4487 | 108954 | 242845 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fadliaulawi/distilbert-base-uncased-finetuned-imdb
|
fadliaulawi
| 2023-07-17T08:42:33Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-17T07:19:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
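A minimal fill-mask sketch; the example sentence is illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="fadliaulawi/distilbert-base-uncased-finetuned-imdb")

# "[MASK]" is the mask token of the distilbert-base-uncased tokenizer
print(fill_mask("This movie was absolutely [MASK]."))
```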
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7087 | 1.0 | 157 | 2.4899 |
| 2.5798 | 2.0 | 314 | 2.4231 |
| 2.5271 | 3.0 | 471 | 2.4356 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
uzenhuang/distilgpt2-finetuned-wikitext2
|
uzenhuang
| 2023-07-17T08:40:04Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T08:42:11Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
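A minimal text-generation sketch; the prompt and generation settings are illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="uzenhuang/distilgpt2-finetuned-wikitext2")

# sample a short continuation of an illustrative prompt
output = generator("The history of natural language processing", max_new_tokens=50)
print(output[0]["generated_text"])
```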
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7578 | 1.0 | 2334 | 3.6665 |
| 3.6405 | 2.0 | 4668 | 3.6480 |
| 3.5943 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ITG/wav2vec2-large-xlsr-gl
|
ITG
| 2023-07-17T08:35:55Z | 78 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ITG",
"PyTorch",
"Transformers",
"gl",
"dataset:openslr",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-17T08:15:40Z |
---
license: cc-by-nc-nd-4.0
datasets:
- openslr
language:
- gl
pipeline_tag: automatic-speech-recognition
tags:
- ITG
- PyTorch
- Transformers
- wav2vec2
---
# Wav2Vec2 Large XLSR Galician
## Description
This is a fine-tuned version of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model for ASR in Galician.
---
## Dataset
The dataset used for fine-tuning this model was the [OpenSLR galician](https://huggingface.co/datasets/openslr/viewer/SLR77) dataset, available in the openslr repository.
---
## Example inference script
### Check this example script to run our model in inference mode
```python
import librosa
import torch
from transformers import AutoProcessor, AutoModelForCTC

filename = "demo.wav"  # change this line to the name of your audio file
sample_rate = 16_000

processor = AutoProcessor.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
model = AutoModelForCTC.from_pretrained('ITG/wav2vec2-large-xlsr-gl')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

# load and resample the audio, run a forward pass and greedy-decode the CTC logits
speech_array, _ = librosa.load(filename, sr=sample_rate)
inputs = processor(speech_array, sampling_rate=sample_rate, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
decode_output = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
print(f"ASR Galician wav2vec2-large-xlsr output: {decode_output}")
```
---
## Fine-tuning hyper-parameters
| **Hyper-parameter** | **Value** |
|:----------------------------------------:|:---------------------------:|
| Training batch size | 16 |
| Evaluation batch size | 8 |
| Learning rate | 3e-4 |
| Gradient accumulation steps | 2 |
| Group by length | true |
| Evaluation strategy | steps |
| Max training epochs | 50 |
| Max steps | 4000 |
| Generate max length | 225 |
| FP16 | true |
| Metric for best model | wer |
| Greater is better | false |
## Fine-tuning in a different dataset or style
If you're interested in fine-tuning your own wav2vec2 model, we suggest starting with the [facebook/wav2vec2-large-xlsr-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). Additionally,
you may find this [fine-tuning on Galician notebook by Diego Fustes](https://github.com/diego-fustes/xlsr-fine-tuning-gl/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Galician.ipynb) to be a valuable resource.
This guide served as a helpful reference during the training process of this Galician wav2vec2-large-xlsr model!
|
NasimB/cbt-rarity-all-guten-rarity-all-end-19k-mixed
|
NasimB
| 2023-07-17T08:35:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T06:37:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-rarity-all-guten-rarity-all-end-19k-mixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-rarity-all-guten-rarity-all-end-19k-mixed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7045 | 0.29 | 500 | 5.6303 |
| 5.3451 | 0.59 | 1000 | 5.2024 |
| 4.993 | 0.88 | 1500 | 4.9525 |
| 4.7145 | 1.17 | 2000 | 4.7988 |
| 4.5613 | 1.47 | 2500 | 4.6763 |
| 4.4489 | 1.76 | 3000 | 4.5785 |
| 4.3287 | 2.05 | 3500 | 4.4979 |
| 4.1353 | 2.35 | 4000 | 4.4492 |
| 4.1069 | 2.64 | 4500 | 4.3901 |
| 4.0676 | 2.93 | 5000 | 4.3409 |
| 3.8575 | 3.23 | 5500 | 4.3364 |
| 3.8071 | 3.52 | 6000 | 4.3043 |
| 3.7948 | 3.81 | 6500 | 4.2695 |
| 3.6747 | 4.11 | 7000 | 4.2699 |
| 3.5247 | 4.4 | 7500 | 4.2635 |
| 3.5208 | 4.69 | 8000 | 4.2499 |
| 3.5068 | 4.99 | 8500 | 4.2371 |
| 3.3383 | 5.28 | 9000 | 4.2509 |
| 3.332 | 5.58 | 9500 | 4.2494 |
| 3.3304 | 5.87 | 10000 | 4.2487 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
madoe001/a2c-PandaReachDense-v2
|
madoe001
| 2023-07-17T08:27:55Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:25:09Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.85 +/- 0.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
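A minimal loading sketch, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming convention used with `huggingface_sb3` (the zip filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# the filename "a2c-PandaReachDense-v2.zip" is an assumption based on the usual naming convention
checkpoint = load_from_hub(
    repo_id="madoe001/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```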
|
MelindaStudy/sd-class-butterflies-32
|
MelindaStudy
| 2023-07-17T08:16:47Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-17T08:16:17Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('MelindaStudy/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
msrtoto/Coral_AI_TB
|
msrtoto
| 2023-07-17T08:15:56Z | 237 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T08:15:50Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Coral_AI_TB
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9821428656578064
---
# Coral_AI_TB
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bird

#### Human

#### Lynx

#### Squirrel

#### Wolf

|
ailabturkiye/Kibariye
|
ailabturkiye
| 2023-07-17T08:10:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-17T07:07:21Z |
[](discord.gg/ailab)


# Kibariye - RVC V2 - Mangio Crepe - 200 Epoch
**This is the voice model of the singer Kibariye, trained with RVC V2 for 200 epochs.**
**A 22-minute dataset was used.**
_The dataset and the training were done by me._
__Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the OpenRAIL license.__
## Credits
**Please give credits when sharing a cover made with this model on any platform.**
- Discord: tahaefe.ipekk
- Reddit: u/jackk_m
- YouTube: 𝖏𝖆𝖈𝖐𝖘𝖑𝖜𝖐 (https://www.youtube.com/channel/UCZSMJToEeMuqMFDL318v3Xw)
- TikTok: jackss.aep (https://www.tiktok.com/@jackss.aep)
- Instagram: jackslwk (https://www.instagram.com/jackslwk/)

[](discord.gg/ailab)

|
rtyui123/CartPole-v1
|
rtyui123
| 2023-07-17T08:03:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T08:03:46Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 124.50 +/- 5.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ratishsp/Centrum-Large
|
ratishsp
| 2023-07-17T07:58:32Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"generated_from_trainer",
"dataset:ratishsp/newshead",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-17T07:32:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ratishsp/newshead
model-index:
- name: Centrum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Centrum
Centrum is a pretrained model for multi-document summarization, trained with a centroid-based pretraining objective on the NewSHead dataset. It is initialized from allenai/led-large-16384. The details of the approach are described in the ACL 2023 paper *Multi-Document Summarization with Centroid-Based Pretraining* (Ratish Puduppully, Parag Jain, Nancy F. Chen and Mark Steedman). It achieves the following results on the evaluation set:
- Loss: 3.3292
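A minimal multi-document summarization sketch with the LED seq2seq API; the naive document concatenation is an assumption, as this card does not specify the input formatting used during pretraining:
```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

model_id = "ratishsp/Centrum-Large"
tokenizer = LEDTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

# naive concatenation of the cluster's articles; the exact separator convention
# used during pretraining is not specified in this card
documents = ["First news article ...", "Second news article ..."]
inputs = tokenizer(" ".join(documents), return_tensors="pt", truncation=True, max_length=4096)

# LED expects global attention on at least the first token
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```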
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 100000
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.7884 | 0.05 | 500 | 3.7054 |
| 3.6593 | 0.09 | 1000 | 3.6245 |
| 3.6425 | 0.14 | 1500 | 3.5841 |
| 3.6008 | 0.19 | 2000 | 3.5561 |
| 3.5645 | 0.23 | 2500 | 3.5372 |
| 3.568 | 0.28 | 3000 | 3.5187 |
| 3.5408 | 0.32 | 3500 | 3.5045 |
| 3.5447 | 0.37 | 4000 | 3.4951 |
| 3.5324 | 0.42 | 4500 | 3.4845 |
| 3.5192 | 0.46 | 5000 | 3.4739 |
| 3.4841 | 0.51 | 5500 | 3.4684 |
| 3.4703 | 0.56 | 6000 | 3.4604 |
| 3.4759 | 0.6 | 6500 | 3.4534 |
| 3.4647 | 0.65 | 7000 | 3.4476 |
| 3.4726 | 0.7 | 7500 | 3.4399 |
| 3.4522 | 0.74 | 8000 | 3.4332 |
| 3.4454 | 0.79 | 8500 | 3.4277 |
| 3.4281 | 0.83 | 9000 | 3.4229 |
| 3.4341 | 0.88 | 9500 | 3.4173 |
| 3.4563 | 0.93 | 10000 | 3.4161 |
| 3.4188 | 0.97 | 10500 | 3.4094 |
| 3.3967 | 1.02 | 11000 | 3.4123 |
| 3.3647 | 1.07 | 11500 | 3.4061 |
| 3.3604 | 1.11 | 12000 | 3.4011 |
| 3.3662 | 1.16 | 12500 | 3.4011 |
| 3.3698 | 1.21 | 13000 | 3.3918 |
| 3.3558 | 1.25 | 13500 | 3.3910 |
| 3.3421 | 1.3 | 14000 | 3.3891 |
| 3.3468 | 1.34 | 14500 | 3.3894 |
| 3.3333 | 1.39 | 15000 | 3.3817 |
| 3.3545 | 1.44 | 15500 | 3.3803 |
| 3.3411 | 1.48 | 16000 | 3.3784 |
| 3.3338 | 1.53 | 16500 | 3.3782 |
| 3.3354 | 1.58 | 17000 | 3.3749 |
| 3.3341 | 1.62 | 17500 | 3.3714 |
| 3.3302 | 1.67 | 18000 | 3.3677 |
| 3.3179 | 1.71 | 18500 | 3.3659 |
| 3.3381 | 1.76 | 19000 | 3.3645 |
| 3.3223 | 1.81 | 19500 | 3.3619 |
| 3.3079 | 1.85 | 20000 | 3.3593 |
| 3.3156 | 1.9 | 20500 | 3.3576 |
| 3.3056 | 1.95 | 21000 | 3.3582 |
| 3.3117 | 1.99 | 21500 | 3.3552 |
| 3.2522 | 2.04 | 22000 | 3.3550 |
| 3.2522 | 2.09 | 22500 | 3.3586 |
| 3.2386 | 2.13 | 23000 | 3.3548 |
| 3.2574 | 2.18 | 23500 | 3.3544 |
| 3.239 | 2.22 | 24000 | 3.3566 |
| 3.2468 | 2.27 | 24500 | 3.3528 |
| 3.2264 | 2.32 | 25000 | 3.3511 |
| 3.2501 | 2.36 | 25500 | 3.3482 |
| 3.2204 | 2.41 | 26000 | 3.3506 |
| 3.2302 | 2.46 | 26500 | 3.3526 |
| 3.2353 | 2.5 | 27000 | 3.3492 |
| 3.2494 | 2.55 | 27500 | 3.3452 |
| 3.2423 | 2.6 | 28000 | 3.3455 |
| 3.2233 | 2.64 | 28500 | 3.3447 |
| 3.2498 | 2.69 | 29000 | 3.3420 |
| 3.2175 | 2.73 | 29500 | 3.3457 |
| 3.2398 | 2.78 | 30000 | 3.3402 |
| 3.2242 | 2.83 | 30500 | 3.3421 |
| 3.2185 | 2.87 | 31000 | 3.3457 |
| 3.2274 | 2.92 | 31500 | 3.3419 |
| 3.2251 | 2.97 | 32000 | 3.3449 |
| 3.1507 | 3.01 | 32500 | 3.3518 |
| 3.165 | 3.06 | 33000 | 3.3462 |
| 3.1512 | 3.11 | 33500 | 3.3434 |
| 3.1598 | 3.15 | 34000 | 3.3433 |
| 3.1728 | 3.2 | 34500 | 3.3445 |
| 3.1838 | 3.24 | 35000 | 3.3456 |
| 3.1649 | 3.29 | 35500 | 3.3442 |
| 3.1684 | 3.34 | 36000 | 3.3404 |
| 3.1587 | 3.38 | 36500 | 3.3406 |
| 3.1586 | 3.43 | 37000 | 3.3442 |
| 3.1545 | 3.48 | 37500 | 3.3381 |
| 3.1674 | 3.52 | 38000 | 3.3436 |
| 3.1717 | 3.57 | 38500 | 3.3373 |
| 3.147 | 3.62 | 39000 | 3.3408 |
| 3.1462 | 3.66 | 39500 | 3.3374 |
| 3.156 | 3.71 | 40000 | 3.3382 |
| 3.1354 | 3.75 | 40500 | 3.3366 |
| 3.1613 | 3.8 | 41000 | 3.3317 |
| 3.143 | 3.85 | 41500 | 3.3347 |
| 3.1667 | 3.89 | 42000 | 3.3353 |
| 3.1597 | 3.94 | 42500 | 3.3341 |
| 3.1566 | 3.99 | 43000 | 3.3357 |
| 3.124 | 4.03 | 43500 | 3.3410 |
| 3.1035 | 4.08 | 44000 | 3.3434 |
| 3.0881 | 4.12 | 44500 | 3.3411 |
| 3.1131 | 4.17 | 45000 | 3.3379 |
| 3.1191 | 4.22 | 45500 | 3.3468 |
| 3.1119 | 4.26 | 46000 | 3.3356 |
| 3.0957 | 4.31 | 46500 | 3.3417 |
| 3.1024 | 4.36 | 47000 | 3.3380 |
| 3.1141 | 4.4 | 47500 | 3.3472 |
| 3.0851 | 4.45 | 48000 | 3.3513 |
| 3.1252 | 4.5 | 48500 | 3.3351 |
| 3.1125 | 4.54 | 49000 | 3.3423 |
| 3.1019 | 4.59 | 49500 | 3.3396 |
| 3.1185 | 4.63 | 50000 | 3.3349 |
| 3.1042 | 4.68 | 50500 | 3.3350 |
| 3.1153 | 4.73 | 51000 | 3.3345 |
| 3.1289 | 4.77 | 51500 | 3.3356 |
| 3.1075 | 4.82 | 52000 | 3.3335 |
| 3.1151 | 4.87 | 52500 | 3.3385 |
| 3.094 | 4.91 | 53000 | 3.3292 |
| 3.1272 | 4.96 | 53500 | 3.3349 |
| 3.0847 | 5.01 | 54000 | 3.3407 |
| 3.0662 | 5.05 | 54500 | 3.3378 |
| 3.0345 | 5.1 | 55000 | 3.3481 |
| 3.0611 | 5.14 | 55500 | 3.3410 |
| 3.0566 | 5.19 | 56000 | 3.3424 |
| 3.0413 | 5.24 | 56500 | 3.3466 |
| 3.0291 | 5.28 | 57000 | 3.3453 |
| 3.0569 | 5.33 | 57500 | 3.3491 |
| 3.0645 | 5.38 | 58000 | 3.3378 |
| 3.0646 | 5.42 | 58500 | 3.3434 |
| 3.045 | 5.47 | 59000 | 3.3418 |
| 3.0551 | 5.52 | 59500 | 3.3426 |
| 3.0706 | 5.56 | 60000 | 3.3378 |
| 3.0556 | 5.61 | 60500 | 3.3407 |
| 3.0743 | 5.65 | 61000 | 3.3520 |
| 3.0764 | 5.7 | 61500 | 3.3320 |
| 3.0723 | 5.75 | 62000 | 3.3352 |
| 3.0716 | 5.79 | 62500 | 3.3327 |
| 3.0618 | 5.84 | 63000 | 3.3447 |
| 3.0662 | 5.89 | 63500 | 3.3312 |
| 3.0758 | 5.93 | 64000 | 3.3323 |
| 3.0501 | 5.98 | 64500 | 3.3400 |
| 2.978 | 6.03 | 65000 | 3.3473 |
| 3.0131 | 6.07 | 65500 | 3.3440 |
| 3.0212 | 6.12 | 66000 | 3.3401 |
| 3.0095 | 6.16 | 66500 | 3.3361 |
| 3.0118 | 6.21 | 67000 | 3.3352 |
| 3.0249 | 6.26 | 67500 | 3.3398 |
| 3.0107 | 6.3 | 68000 | 3.3444 |
| 3.0175 | 6.35 | 68500 | 3.3490 |
| 3.0241 | 6.4 | 69000 | 3.3402 |
| 3.0094 | 6.44 | 69500 | 3.3437 |
| 3.0286 | 6.49 | 70000 | 3.3355 |
| 3.0391 | 6.54 | 70500 | 3.3385 |
| 3.0243 | 6.58 | 71000 | 3.3395 |
| 3.0232 | 6.63 | 71500 | 3.3370 |
| 3.0168 | 6.67 | 72000 | 3.3458 |
| 3.0432 | 6.72 | 72500 | 3.3400 |
| 3.0121 | 6.77 | 73000 | 3.3420 |
| 3.0137 | 6.81 | 73500 | 3.3436 |
| 3.0333 | 6.86 | 74000 | 3.3362 |
| 3.0194 | 6.91 | 74500 | 3.3355 |
| 3.0198 | 6.95 | 75000 | 3.3434 |
| 3.0105 | 7.0 | 75500 | 3.3346 |
| 2.9833 | 7.04 | 76000 | 3.3492 |
| 2.9876 | 7.09 | 76500 | 3.3351 |
| 2.9918 | 7.14 | 77000 | 3.3466 |
| 2.9983 | 7.18 | 77500 | 3.3422 |
| 2.9893 | 7.23 | 78000 | 3.3364 |
| 2.9946 | 7.28 | 78500 | 3.3365 |
| 2.9851 | 7.32 | 79000 | 3.3402 |
| 2.9797 | 7.37 | 79500 | 3.3450 |
| 2.9888 | 7.42 | 80000 | 3.3423 |
| 3.0182 | 7.46 | 80500 | 3.3429 |
| 2.983 | 7.51 | 81000 | 3.3345 |
| 2.9959 | 7.55 | 81500 | 3.3397 |
| 2.9935 | 7.6 | 82000 | 3.3389 |
| 3.0008 | 7.65 | 82500 | 3.3442 |
| 2.9898 | 7.69 | 83000 | 3.3418 |
| 2.9989 | 7.74 | 83500 | 3.3387 |
| 2.985 | 7.79 | 84000 | 3.3482 |
| 2.963 | 7.83 | 84500 | 3.3369 |
| 3.0009 | 7.88 | 85000 | 3.3355 |
| 2.9925 | 7.93 | 85500 | 3.3434 |
| 2.9616 | 7.97 | 86000 | 3.3346 |
| 2.9769 | 8.02 | 86500 | 3.3430 |
| 2.9663 | 8.06 | 87000 | 3.3407 |
| 2.9872 | 8.11 | 87500 | 3.3448 |
| 2.9892 | 8.16 | 88000 | 3.3354 |
| 2.9526 | 8.2 | 88500 | 3.3445 |
| 2.9426 | 8.25 | 89000 | 3.3405 |
| 2.9528 | 8.3 | 89500 | 3.3466 |
| 2.9541 | 8.34 | 90000 | 3.3434 |
| 2.9643 | 8.39 | 90500 | 3.3475 |
| 2.9893 | 8.44 | 91000 | 3.3434 |
| 2.9655 | 8.48 | 91500 | 3.3433 |
| 2.9735 | 8.53 | 92000 | 3.3416 |
| 2.9722 | 8.57 | 92500 | 3.3443 |
| 2.9639 | 8.62 | 93000 | 3.3410 |
| 2.972 | 8.67 | 93500 | 3.3407 |
| 2.9586 | 8.71 | 94000 | 3.3393 |
| 2.9591 | 8.76 | 94500 | 3.3412 |
| 2.9523 | 8.81 | 95000 | 3.3411 |
| 2.9572 | 8.85 | 95500 | 3.3393 |
| 2.9435 | 8.9 | 96000 | 3.3414 |
| 2.9667 | 8.95 | 96500 | 3.3392 |
| 2.9824 | 8.99 | 97000 | 3.3428 |
| 2.9265 | 9.04 | 97500 | 3.3417 |
| 2.9409 | 9.08 | 98000 | 3.3435 |
| 2.9387 | 9.13 | 98500 | 3.3425 |
| 2.9635 | 9.18 | 99000 | 3.3420 |
| 2.9527 | 9.22 | 99500 | 3.3421 |
| 2.9755 | 9.27 | 100000 | 3.3430 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
EhsanElahi/speecht5_finetuned_voxpopuli_nl
|
EhsanElahi
| 2023-07-17T07:48:50Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-14T12:16:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5015
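Since the card does not yet include a usage snippet, here is a minimal, hedged inference sketch. It assumes the repository ships the SpeechT5 processor files alongside the fine-tuned weights and borrows the standard `microsoft/speecht5_hifigan` vocoder; the zero speaker embedding is a placeholder (a real x-vector, e.g. from `Matthijs/cmu-arctic-xvectors`, gives much better speech), and the example sentence is invented.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Load the fine-tuned TTS checkpoint (assumes processor files are included in the repo)
processor = SpeechT5Processor.from_pretrained("EhsanElahi/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("EhsanElahi/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="hallo, hoe gaat het met je?", return_tensors="pt")

# Placeholder speaker embedding; SpeechT5 expects a (1, 512) x-vector
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("tts_output.wav", speech.numpy(), samplerate=16000)
```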
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5771 | 8.61 | 1000 | 0.5219 |
| 0.5411 | 17.22 | 2000 | 0.5064 |
| 0.5352 | 25.83 | 3000 | 0.5012 |
| 0.5324 | 34.45 | 4000 | 0.5015 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
netrough/new-data-model
|
netrough
| 2023-07-17T07:42:24Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-17T07:36:52Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
|
hafidikhsan
| 2023-07-17T07:14:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T07:12:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8697
- Accuracy: 0.78
- F1: 0.7738
- Precision: 0.7735
- Recall: 0.78
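No usage example is provided, so the following hedged sketch shows how an audio-classification checkpoint like this one is typically queried with the `pipeline` API. The file name is a placeholder for a 16 kHz mono recording, and the meaning of the returned labels is not documented on this card.

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v2",
)
scores = clf("learner_utterance.wav")  # hypothetical recording of a learner's utterance
print(scores)  # list of {label, score} dicts, highest score first
```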
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0774 | 1.0 | 500 | 0.9199 | 0.57 | 0.5728 | 0.6154 | 0.57 |
| 0.6526 | 2.0 | 1000 | 0.6857 | 0.7 | 0.6925 | 0.7167 | 0.7 |
| 0.3767 | 3.0 | 1500 | 0.5830 | 0.79 | 0.7887 | 0.7884 | 0.79 |
| 0.242 | 4.0 | 2000 | 0.7786 | 0.82 | 0.8160 | 0.8163 | 0.82 |
| 0.2691 | 5.0 | 2500 | 0.8399 | 0.814 | 0.8113 | 0.8109 | 0.814 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/shaco
|
ailabturkiye
| 2023-07-17T06:35:20Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:30:09Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 250 epochs on a roughly 5-minute dataset of Shaco, a champion from League of Legends. A Pitch (Transpose) of -3 or -5 is recommended. If you share a cover made with this model on any platform, please include our Discord link. discord.gg/ailab
|
ailabturkiye/drmundo
|
ailabturkiye
| 2023-07-17T06:34:42Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:28:34Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 500 epochs on a roughly 5-minute dataset of Dr. Mundo, a champion from League of Legends. If you share a cover made with this model on any platform, please include our Discord link. discord.gg/ailab
|
StarRing2022/RWKV-4-World-3B
|
StarRing2022
| 2023-07-17T06:31:33Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-17T00:40:44Z |
---
license: apache-2.0
---
Hugging Face format of RWKV-4-World. Because the tokenizer of the new World version differs considerably from the earlier Raven/Pile versions, a new HF adaptation is needed.
ringrwkv is compatible with both the native rwkv library and the transformers RWKV implementation, adds the configuration and code for the World version (covering the full 1.5B, 3B, and 7B series), and fixes a subtle issue in the original HF RWKV when forwarding RwkvOutput, mainly by introducing and clarifying last_hidden_state. The lightweight usage code below is fairly convenient:
RingRWKV open-source GitHub repository: https://github.com/StarRing2022/RingRWKV
```python
import torch
from ringrwkv.configuration_rwkv_world import RwkvConfig
from ringrwkv.rwkv_tokenizer import TRIE_TOKENIZER
from ringrwkv.modehf_world import RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-World-3B")  # or download this model to a local folder
tokenizer = TRIE_TOKENIZER('./ringrwkv/rwkv_vocab_v20230424.txt')

text = "你叫什么名字?"  # "What is your name?"
question = f'Question: {text.strip()}\n\nAnswer:'
input_ids = tokenizer.encode(question)
input_ids = torch.tensor(input_ids).unsqueeze(0)
out = model.generate(input_ids, max_new_tokens=40)

# drop elements with token id 0 (the original loop removed them in place)
outlist = [tok for tok in out[0].tolist() if tok != 0]
answer = tokenizer.decode(outlist)
print(answer)
```
|
charlieoneill/falcon-abstracts
|
charlieoneill
| 2023-07-17T06:29:06Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-17T00:55:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon-abstracts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-abstracts
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ailabturkiye/rtkamil
|
ailabturkiye
| 2023-07-17T06:25:41Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:21:55Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 1000 epochs on a roughly 3-minute dataset of Kamil, a beloved character from the cartoon Rafadan Tayfa. If you share a cover made with this model on any platform, please include our Discord link. discord.gg/ailab
|
NasimB/cbt-mod-log-rarity-all
|
NasimB
| 2023-07-17T06:22:57Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T04:11:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-mod-log-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-mod-log-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7026 | 0.29 | 500 | 5.6447 |
| 5.3372 | 0.58 | 1000 | 5.2129 |
| 4.9906 | 0.87 | 1500 | 4.9629 |
| 4.7124 | 1.17 | 2000 | 4.8120 |
| 4.5602 | 1.46 | 2500 | 4.6878 |
| 4.4529 | 1.75 | 3000 | 4.5834 |
| 4.3223 | 2.04 | 3500 | 4.5006 |
| 4.1297 | 2.33 | 4000 | 4.4577 |
| 4.097 | 2.62 | 4500 | 4.3979 |
| 4.0576 | 2.92 | 5000 | 4.3446 |
| 3.8608 | 3.21 | 5500 | 4.3387 |
| 3.7927 | 3.5 | 6000 | 4.3073 |
| 3.7829 | 3.79 | 6500 | 4.2777 |
| 3.6916 | 4.08 | 7000 | 4.2713 |
| 3.5078 | 4.37 | 7500 | 4.2688 |
| 3.5099 | 4.66 | 8000 | 4.2551 |
| 3.4934 | 4.96 | 8500 | 4.2416 |
| 3.3384 | 5.25 | 9000 | 4.2546 |
| 3.3186 | 5.54 | 9500 | 4.2532 |
| 3.3113 | 5.83 | 10000 | 4.2524 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ailabturkiye/2xciv
|
ailabturkiye
| 2023-07-17T06:22:21Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-07-17T06:16:23Z |
---
license: openrail
language:
- tr
tags:
- music
---
Created by training for 250 epochs on a roughly 5-minute dataset of 2xCIV, a VALORANT YouTuber. If you share a cover made with this model on any platform, please include our Discord link. discord.gg/ailab
|
shivaneej/my_awesome_billsum_model
|
shivaneej
| 2023-07-17T06:19:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-14T06:38:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4536
- Rouge1: 0.1425
- Rouge2: 0.051
- Rougel: 0.1174
- Rougelsum: 0.1176
- Gen Len: 19.0
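As the usage sections below are still empty, here is a hedged inference sketch. The `summarize:` prefix follows the common T5/billsum fine-tuning recipe and is an assumption, as is the example text.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="shivaneej/my_awesome_billsum_model")

bill_text = (
    "summarize: The bill establishes a grant program to support rural broadband "
    "deployment and directs the agency to report annually on coverage gaps."
)
print(summarizer(bill_text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```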
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7496 | 0.1275 | 0.0381 | 0.1084 | 0.1082 | 19.0 |
| No log | 2.0 | 124 | 2.5353 | 0.1365 | 0.0475 | 0.1138 | 0.1136 | 19.0 |
| No log | 3.0 | 186 | 2.4718 | 0.1409 | 0.0495 | 0.1157 | 0.1156 | 19.0 |
| No log | 4.0 | 248 | 2.4536 | 0.1425 | 0.051 | 0.1174 | 0.1176 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
StarRing2022/RWKV-4-Raven-3B-v11-zh
|
StarRing2022
| 2023-07-17T06:16:24Z | 98 | 6 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T01:26:32Z |
---
{RWKV-4-Raven-3B-v11-zh}
---
Converts the RWKV model to Hugging Face format for seamless integration with HF, so RWKV can be called in just a few lines of code.
Base model: RWKV-4-Raven-3B-v11-Eng49%-Chn49%-Jpn1%-Other1%-20230429-ctx4096.pth (https://huggingface.co/BlinkDL/rwkv-4-raven)
```python
import torch
from transformers import GPTNeoXTokenizerFast, RwkvConfig, RwkvForCausalLM

model = RwkvForCausalLM.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("StarRing2022/RWKV-4-Raven-3B-v11-zh")

text = "你好"  # "Hello"
input_ids = tokenizer.encode(text, return_tensors='pt')
out = model.generate(input_ids=input_ids, max_new_tokens=128)
answer = tokenizer.decode(out[0])
print(answer)
```
Open-source GitHub repository: https://github.com/StarRing2022/HF-For-RWKVRaven-Alpaca/
|
kayteekay/jordan-generator-v1
|
kayteekay
| 2023-07-17T06:07:15Z | 127 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-2",
"base_model:adapter:CompVis/stable-diffusion-v1-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-17T02:19:36Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - kayteekay/jordan-generator-v1
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-2. The weights were fine-tuned on the kayteekay/jordan-generator-dataset dataset. You can find some example images in the following.




|
Althhecow/CattleMix
|
Althhecow
| 2023-07-17T06:00:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-16T21:23:09Z |
Model based on Anything v3 and a few older models that I've since lost track of. This model was originally mixed over 6 months ago, but has stayed useful for cartoonish / anthropomorphic subjects, despite newer models having been released since.
|
MHRDYN7/distilhubert-finetuned-gtzan
|
MHRDYN7
| 2023-07-17T05:48:16Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T05:37:35Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
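No inference example is given, so below is a hedged sketch using the lower-level API; `my_song.wav` is a hypothetical local clip, and loading/resampling via librosa is just one reasonable choice.

```python
import librosa
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "MHRDYN7/distilhubert-finetuned-gtzan"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# Hypothetical music clip, resampled to the extractor's expected rate (16 kHz for DistilHuBERT)
waveform, sr = librosa.load("my_song.wav", sr=extractor.sampling_rate)

inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # predicted GTZAN genre
```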
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hyeongjin99/vit-base-aihub_model-v2
|
hyeongjin99
| 2023-07-17T05:36:33Z | 221 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T05:21:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-aihub_model-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.963855421686747
- name: Precision
type: precision
value: 0.9609609235289817
- name: Recall
type: recall
value: 0.9613676432460462
- name: F1
type: f1
value: 0.9604284776111401
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-aihub_model-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3076
- Accuracy: 0.9639
- Precision: 0.9610
- Recall: 0.9614
- F1: 0.9604
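A hedged classification sketch follows; the card does not document the label set, and `example.jpg` is a placeholder input.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hyeongjin99/vit-base-aihub_model-v2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```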
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 3 | 1.2753 | 0.8373 | 0.8563 | 0.7993 | 0.8022 |
| No log | 2.0 | 6 | 1.1252 | 0.8675 | 0.8895 | 0.8300 | 0.8333 |
| No log | 3.0 | 9 | 0.9427 | 0.8976 | 0.9185 | 0.8696 | 0.8760 |
| 1.1721 | 4.0 | 12 | 0.7995 | 0.9398 | 0.9474 | 0.9195 | 0.9246 |
| 1.1721 | 5.0 | 15 | 0.6820 | 0.9699 | 0.9704 | 0.9613 | 0.9642 |
| 1.1721 | 6.0 | 18 | 0.5927 | 0.9639 | 0.9603 | 0.9583 | 0.9587 |
| 0.7084 | 7.0 | 21 | 0.5239 | 0.9759 | 0.9725 | 0.9729 | 0.9725 |
| 0.7084 | 8.0 | 24 | 0.4743 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.7084 | 9.0 | 27 | 0.4436 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.4668 | 10.0 | 30 | 0.4070 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.4668 | 11.0 | 33 | 0.3817 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 12.0 | 36 | 0.3625 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.4668 | 13.0 | 39 | 0.3536 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 14.0 | 42 | 0.3384 | 0.9578 | 0.9558 | 0.9556 | 0.9544 |
| 0.3611 | 15.0 | 45 | 0.3249 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3611 | 16.0 | 48 | 0.3164 | 0.9699 | 0.9665 | 0.9671 | 0.9665 |
| 0.3063 | 17.0 | 51 | 0.3142 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 18.0 | 54 | 0.3122 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.3063 | 19.0 | 57 | 0.3093 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
| 0.294 | 20.0 | 60 | 0.3076 | 0.9639 | 0.9610 | 0.9614 | 0.9604 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kayteekay/jordan-generator
|
kayteekay
| 2023-07-17T05:28:35Z | 3 | 0 |
diffusers
|
[
"diffusers",
"art",
"lora",
"text-to-image",
"en",
"dataset:kayteekay/jordan-generator-dataset",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-07-17T04:46:12Z |
---
license: openrail
datasets:
- kayteekay/jordan-generator-dataset
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- lora
---
|
DracoHugging/Distilbert-sentiment-analysis
|
DracoHugging
| 2023-07-17T05:12:38Z | 130 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T07:20:41Z |
---
model-index:
- name: DracoHugging/Distilbert-sentiment-analysis
results:
- task:
type: Text Classification # Required. Example: automatic-speech-recognition
name: Sentiment Analysis # Optional. Example: Speech Recognition
dataset:
type: Text-2-Text # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: knkarthick/dialogsum # Required. A pretty name for the dataset. Example: Common Voice (French)
metrics:
- type: Validation Loss # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 1.08 # Required. Example: 20.90
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2745
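Since the usage sections are not filled in, here is a minimal hedged sketch with the `pipeline` API; the example sentence is invented, and the card does not document the label names.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DracoHugging/Distilbert-sentiment-analysis")
print(classifier("The delivery was late and the support team never replied."))
```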
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1633 | 1.0 | 1178 | 1.1116 |
| 1.0524 | 2.0 | 2356 | 1.0836 |
| 0.9103 | 3.0 | 3534 | 1.1135 |
| 0.7676 | 4.0 | 4712 | 1.1945 |
| 0.659 | 5.0 | 5890 | 1.2745 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
|
hafidikhsan
| 2023-07-17T04:48:17Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-17T04:47:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-bs-v1
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9211
- Accuracy: 0.718
- F1: 0.7197
- Precision: 0.7231
- Recall: 0.718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9511 | 1.0 | 250 | 0.9034 | 0.548 | 0.5357 | 0.5409 | 0.548 |
| 0.6108 | 2.0 | 500 | 0.7361 | 0.68 | 0.6727 | 0.6731 | 0.68 |
| 0.4412 | 3.0 | 750 | 0.7990 | 0.726 | 0.7188 | 0.7221 | 0.726 |
| 0.2178 | 4.0 | 1000 | 0.7983 | 0.764 | 0.7652 | 0.7674 | 0.764 |
| 0.1726 | 5.0 | 1250 | 0.9572 | 0.764 | 0.7633 | 0.7647 | 0.764 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
StarRing2022/MiLu-GPT
|
StarRing2022
| 2023-07-17T04:47:10Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T04:40:47Z |
---
license: apache-2.0
---
# MiLu-GPT
A language model based on GPT-2 with a BERT tokenizer, trained from scratch on a small, purely Chinese corpus to check how far a small model can go toward ChatGPT-like conversational friendliness.
GPT-2 + BERT tokenizer, trained from scratch (roughly 500k samples of casual chat and similar corpora).
Environment:
- Windows 10 + Torch 1.31 + CUDA 11.6
- transformers 4.29
Open-source GitHub repository: https://github.com/StarRing2022/MiLu-GPT/
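A minimal generation sketch, assuming the repository hosts HF-format GPT-2 weights together with the BERT tokenizer described above; the prompt and sampling settings are illustrative only.

```python
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("StarRing2022/MiLu-GPT")
model = GPT2LMHeadModel.from_pretrained("StarRing2022/MiLu-GPT")

text = "你好"  # "Hello"
input_ids = tokenizer.encode(text, return_tensors="pt")
out = model.generate(input_ids, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```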
|
casque/meichidarkMix_meichidarkMIX38
|
casque
| 2023-07-17T04:39:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-17T03:58:55Z |
---
license: creativeml-openrail-m
---
|
DracoHugging/flan-T5-base-sum
|
DracoHugging
| 2023-07-17T04:23:51Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-05T13:58:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-T5-base-sum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.6617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-T5-base-sum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
- Rouge1: 47.6617
- Rouge2: 23.7647
- Rougel: 40.1155
- Rougelsum: 43.6943
- Gen Len: 17.2759
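A hedged inference sketch with explicit classes; the dialogue and the `summarize:` prefix follow the common samsum fine-tuning convention and are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "DracoHugging/flan-T5-base-sum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

dialogue = (
    "summarize: Anna: Are we still on for dinner tonight? "
    "Ben: Yes, 7 pm at the usual place. Anna: Great, see you there!"
)
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```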
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3822 | 47.2814 | 23.7835 | 39.7427 | 43.4897 | 17.0256 |
| 1.3572 | 2.0 | 3684 | 1.3747 | 47.553 | 23.5714 | 39.8212 | 43.6246 | 17.4420 |
| 1.2822 | 3.0 | 5526 | 1.3721 | 47.6617 | 23.7647 | 40.1155 | 43.6943 | 17.2759 |
| 1.2375 | 4.0 | 7368 | 1.3764 | 47.7453 | 24.1099 | 40.1684 | 43.8659 | 17.2943 |
| 1.1935 | 5.0 | 9210 | 1.3780 | 47.614 | 23.6643 | 39.8434 | 43.6558 | 17.3077 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crumb/opentinystories-30m-base
|
crumb
| 2023-07-17T04:20:30Z | 162 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:crumb/flan-ul2-tinystories",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T06:29:40Z |
---
license: mit
datasets:
- crumb/flan-ul2-tinystories
language:
- en
---
# Tinystories-30m-UL2
*GPT-4 generated model card*
## Model Details
- **Model Name**: [crumb/opentinystories-30m-base](https://huggingface.co/crumb/opentinystories-30m-base)
- **Model Type**: GPTNeoXForCausalLM
- **Model Training Details**: The model is trained using [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories) which contains around a quarter of a million examples generated from Flan-UL2 (20b) with the prompt "Write a short story using the vocabulary of a first-grader."
## Model Description
This model is trained with the specific purpose of generating short narratives using a vocabulary limited to the level of a first-grader. In terms of complexity and language usage, the model is designed to produce simplistic and easily comprehensible text.
Learning from text generated by Flan-UL2 (20b), the model adopts a simple storyline layout and a minimalistic vocabulary, which it recognizes are easier to learn and replicate.
## Training
The model is trained for four epochs on the [crumb/flan-ul2-tinystories](https://huggingface.co/datasets/crumb/flan-ul2-tinystories) dataset (inspired by [roneneldan/TinyStories](https://huggingface.co/datasets/roneneldan/TinyStories)), created with the help of Flan-UL2 (20b), as opposed to GPT-3.5/4 in the original Tinystories. The data is designed to follow the format of a simple, first-grader-level narrative, which aids the model in learning simple vocabulary and sentence structure.
Training arguments:
```
per_device_train_batch_size=16,
gradient_accumulation_steps=8,
warmup_steps=128,
num_train_epochs=4,
learning_rate=2e-4,
eval_steps=64,
optim="adamw_torch",
```
## Usage
This model serves as a meaningful research tool in exploring the learning tendencies of smaller language models and their ability to grasp simplified language constructs. Its specific training set effectively maps the idea that a constrained vocabulary and simplistic story layouts are inherently easier to learn.
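As a concrete illustration of this intended use, here is a hedged generation sketch; the prompt and sampling parameters are invented.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "crumb/opentinystories-30m-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Once upon a time, a little dog found a red ball."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```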
## Validation and Performance
The model's performance was evaluated using a held-out validation set, which constitutes 1% of the original dataset. During evaluation, the model achieved a loss of 2.284920; during training, it achieved a loss of 2.647377.

|
LLschoolJ/ppo-Huggy
|
LLschoolJ
| 2023-07-17T04:14:01Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-17T03:05:26Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: LLschoolJ/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Yaxin1992/llama-33b-qlora-en-pt-es
|
Yaxin1992
| 2023-07-17T04:06:04Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2023-07-16T18:33:36Z |
---
license: other
base_model: decapoda-research/llama-30b-hf
tags:
- generated_from_trainer
model-index:
- name: llama-33b-qlora-en-pt-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-33b-qlora-en-pt-es
This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3500
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digiplay/CuriousMerge2.5D_v5
|
digiplay
| 2023-07-17T03:59:30Z | 260 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-19T13:42:53Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
A very beautiful 2.5D text-to-image model; the characters look like they have a soul.
Model info:
https://civitai.com/models/79070?modelVersionId=99101
Sample image I made:

|
cebaker/model
|
cebaker
| 2023-07-17T03:51:27Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T03:51:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
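For readers who want to reproduce this setup, the logged values map onto a `BitsAndBytesConfig` roughly as sketched below; the base model name is a placeholder, since the card does not say which model the adapter was trained on.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the logged quantization settings above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)

base = AutoModelForCausalLM.from_pretrained(
    "some-org/some-base-model",  # placeholder: the card does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "cebaker/model")  # attach the trained adapter
```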
### Framework versions
- PEFT 0.4.0.dev0
|
AaAsr/weight
|
AaAsr
| 2023-07-17T03:29:58Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-30T02:31:32Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - AaAsr/weight
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
NasimB/cbt-guten-rarity-all-no-cut
|
NasimB
| 2023-07-17T03:25:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T01:33:55Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-guten-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-guten-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7055 | 0.29 | 500 | 5.6348 |
| 5.3382 | 0.58 | 1000 | 5.2106 |
| 5.0043 | 0.87 | 1500 | 4.9625 |
| 4.7284 | 1.17 | 2000 | 4.8138 |
| 4.5737 | 1.46 | 2500 | 4.6845 |
| 4.4625 | 1.75 | 3000 | 4.5821 |
| 4.3417 | 2.04 | 3500 | 4.5000 |
| 4.1458 | 2.33 | 4000 | 4.4552 |
| 4.1103 | 2.62 | 4500 | 4.3967 |
| 4.075 | 2.91 | 5000 | 4.3438 |
| 3.8778 | 3.21 | 5500 | 4.3374 |
| 3.8089 | 3.5 | 6000 | 4.3042 |
| 3.7987 | 3.79 | 6500 | 4.2728 |
| 3.7134 | 4.08 | 7000 | 4.2660 |
| 3.5302 | 4.37 | 7500 | 4.2613 |
| 3.5237 | 4.66 | 8000 | 4.2464 |
| 3.5142 | 4.95 | 8500 | 4.2344 |
| 3.3667 | 5.24 | 9000 | 4.2470 |
| 3.3384 | 5.54 | 9500 | 4.2447 |
| 3.3305 | 5.83 | 10000 | 4.2444 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
uzenhuang/distilgpt2-finetuned-wikitext2-test
|
uzenhuang
| 2023-07-17T03:22:43Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-17T03:03:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-test
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 277 | 3.8379 |
| 3.8669 | 2.0 | 554 | 3.8250 |
| 3.8669 | 3.0 | 831 | 3.8267 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gyuri2020/kw-classification-setfit-model
|
gyuri2020
| 2023-07-17T03:17:50Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-14T14:50:06Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# gyuri2020/kw-classification-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("gyuri2020/kw-classification-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
huolongguo10/check_sec
|
huolongguo10
| 2023-07-17T03:00:12Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"dataset:huolongguo10/insecure",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-29T05:14:01Z |
---
license: openrail
datasets:
- huolongguo10/insecure
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
# check_sec
Checks the security of web parameters and supports multiple payload types (v0.1.2).
Note: this version is no longer maintained; please use the tiny version.
## Labels
```
LABEL_0: secure
LABEL_1: insecure (may contain a payload)
```
## Usage
```python
import torch
import transformers
from transformers import BertTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('huolongguo10/check_sec_tiny')
model = AutoModelForSequenceClassification.from_pretrained('huolongguo10/check_sec_tiny', num_labels=2)

def check(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_class_id = logits.argmax().item()
    print(f'{predicted_class_id}:{text}')
    return 'secure' if predicted_class_id == 0 else 'insecure'
```
|
dariowsz/whisper-tiny-finetuned-minds-14
|
dariowsz
| 2023-07-17T02:53:30Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-11T13:13:49Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds-14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS 14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.35465116279070
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds-14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS 14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7154
- Wer Ortho: 0.3540
- Wer: 0.3547
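A hedged transcription sketch with the `pipeline` API; the audio file name is a placeholder for any 16 kHz English recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dariowsz/whisper-tiny-finetuned-minds-14")
print(asr("banking_query.wav")["text"])  # placeholder recording of a banking request
```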
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0007 | 17.86 | 500 | 0.7154 | 0.3540 | 0.3547 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dariowsz/speecht5-base-finetuned-lj-speech
|
dariowsz
| 2023-07-17T02:43:57Z | 91 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:lj_speech",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-13T17:22:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- lj_speech
model-index:
- name: speecht5-base-finetuned-lj-speech
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-base-finetuned-lj-speech
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the lj_speech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4544 | 0.68 | 250 | 0.4076 |
| 0.4435 | 1.36 | 500 | 0.3966 |
| 0.4393 | 2.04 | 750 | 0.3930 |
| 0.4322 | 2.71 | 1000 | 0.3929 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lucostiguy11/dreambooth_if_1
|
lucostiguy11
| 2023-07-17T02:26:09Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:DeepFloyd/IF-I-XL-v1.0",
"base_model:finetune:DeepFloyd/IF-I-XL-v1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:IFPipeline",
"region:us"
] |
text-to-image
| 2023-07-17T01:37:40Z |
---
license: creativeml-openrail-m
base_model: DeepFloyd/IF-I-XL-v1.0
instance_prompt: A photo of sks dog in a bucket
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - lucostiguy11/dreambooth_if_1
This is a dreambooth model derived from DeepFloyd/IF-I-XL-v1.0. The weights were trained on A photo of sks dog in a bucket using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
|
samiul25/ppo-LunarLander-v2
|
samiul25
| 2023-07-17T02:25:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T02:25:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.09 +/- 22.88
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
abhi-pwr/news-summarizer
|
abhi-pwr
| 2023-07-17T02:17:24Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-16T10:58:39Z |
---
{}
---
# news-summarizer
# T5 Base Model Fine-Tuned for News Article Summarization
This repository contains a fine-tuned T5 base model for news article summarization. The model has been trained to generate concise summaries of news articles given their full text.
## Model Details
- Model: T5 Base
- Fine-Tuning Task: News Article Summarization
- Training Data: Dataset of news articles with corresponding summaries
- Tokenizer: T5Tokenizer
- Maximum Input Length: 512 tokens
- Maximum Output Length: 150 tokens
- Beam Search: Enabled (with 4 beams)
- Early Stopping: Enabled
## Usage
To use the fine-tuned T5 model for news article summarization, follow the instructions below:
1. Install the required dependencies:

```bash
pip install transformers torch
```
2. Load the fine-tuned model:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'abhi-pwr/news-summarizer'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```

3. Generate summaries:

```python
input_text = "Enter the news article here."
inputs = tokenizer.encode(input_text, return_tensors='pt', max_length=512, truncation=True)
summary_ids = model.generate(inputs, max_length=150, num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```
|
fnlp/moss-rlhf-policy-model-7B-en
|
fnlp
| 2023-07-17T02:13:50Z | 0 | 1 | null |
[
"llm",
"moss",
"rlhf",
"policy model",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-14T07:05:20Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- moss
- rlhf
- policy model
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
The challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, pose a significant barrier for AI researchers working on the technical alignment and safe deployment of LLMs. Stable RLHF training remains a puzzle.
In this technical report, we intend to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run the code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We have uploaded the diff models (thanks to tatsu-lab); you can recover the reward model by following these steps:
```bash
# 1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
# TODO
# For Chinese:
# https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main

# 2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
# TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we cannot release the **Chinese** SFT model currently.
You can use your own SFT model, or a strong base model, instead of our SFT model.
### Step 3: Start training
Run the command below.
```bash
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dyvapandhu/vit-base-molecul-v2-5-epoch
|
dyvapandhu
| 2023-07-17T01:44:42Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-16T10:13:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: vit-base-molecul-v2-5-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-molecul-v2-5-epoch
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5290
- Accuracy: 0.77
- F1: 0.7698
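For reference, a minimal inference sketch with the `transformers` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="dyvapandhu/vit-base-molecul-v2-5-epoch")
print(classifier("example_molecule.png"))  # placeholder path; returns predicted labels with scores
```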
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
thenewcompany/poca-SoccerTwos
|
thenewcompany
| 2023-07-17T01:43:18Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-14T16:02:37Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: thenewcompany/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
|
NasimB
| 2023-07-17T01:29:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-16T23:44:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3469
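For reference, a minimal generation sketch with the `transformers` pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/all-base-guten-rarity-all-iorder-rarity-all-est-5p5k-mostf")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```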
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7657 | 0.31 | 500 | 5.6541 |
| 5.4202 | 0.63 | 1000 | 5.2254 |
| 5.0681 | 0.94 | 1500 | 4.9792 |
| 4.7759 | 1.25 | 2000 | 4.8288 |
| 4.6402 | 1.56 | 2500 | 4.7011 |
| 4.5298 | 1.88 | 3000 | 4.5950 |
| 4.3183 | 2.19 | 3500 | 4.5365 |
| 4.2235 | 2.5 | 4000 | 4.4739 |
| 4.1818 | 2.82 | 4500 | 4.4112 |
| 4.0408 | 3.13 | 5000 | 4.3818 |
| 3.8987 | 3.44 | 5500 | 4.3582 |
| 3.8824 | 3.75 | 6000 | 4.3198 |
| 3.8108 | 4.07 | 6500 | 4.3076 |
| 3.6036 | 4.38 | 7000 | 4.3014 |
| 3.5997 | 4.69 | 7500 | 4.2881 |
| 3.5879 | 5.01 | 8000 | 4.2752 |
| 3.4104 | 5.32 | 8500 | 4.2857 |
| 3.4084 | 5.63 | 9000 | 4.2831 |
| 3.405 | 5.94 | 9500 | 4.2820 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hansanguw/HSCho_test
|
hansanguw
| 2023-07-17T01:26:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:26:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
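A minimal sketch (assuming a causal-LM base, which the card does not state) of how the quantization settings above map onto loading this adapter with `transformers` and `peft`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftConfig, PeftModel

adapter_id = "hansanguw/HSCho_test"
peft_config = PeftConfig.from_pretrained(adapter_id)  # records the base model this adapter was trained on

# Mirror the quantization settings listed above (8-bit loading, int8 threshold 6.0; 4-bit options unused).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)
```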
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e8_s6789_v3
|
KingKazma
| 2023-07-17T01:19:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:19:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
RajanGo/TEST-2
|
RajanGo
| 2023-07-17T01:13:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:13:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3
|
KingKazma
| 2023-07-17T01:05:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T01:05:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
timjwhite/poca-SoccerTwos
|
timjwhite
| 2023-07-17T00:56:31Z | 66 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-17T00:45:50Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: timjwhite/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ankammarao/Telugu_to_English_Translation_Bot
|
Ankammarao
| 2023-07-17T00:55:34Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-17T00:37:06Z |
---
license: other
---
```python
from telegram import Update
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
from googletrans import Translator

BOT_TOKEN = '6064527106:AAG_cnj0EprbaEpcUXnGfqvZ7zcKkESbM-8'

def start(update: Update, _: CallbackContext):
    update.message.reply_text("Welcome! I can help you translate Telugu to English. Just send me any Telugu text!")

def translate_telugu_to_english(text):
    translator = Translator()
    result = translator.translate(text, src='te', dest='en')
    return result.text

def translate_message(update: Update, _: CallbackContext):
    message = update.message.text
    translation = translate_telugu_to_english(message)
    update.message.reply_text(f"English Translation: {translation}")

def main():
    updater = Updater(BOT_TOKEN)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler("start", start))
    dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, translate_message))
    updater.start_polling()
    print("Bot started polling for messages...")
    updater.idle()

if __name__ == "__main__":
    main()
```
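Note that the script uses the pre-v20, synchronous python-telegram-bot API (`Updater`, `Filters`), so a 13.x release is assumed; the `googletrans` pin is likewise an assumption about a commonly used release, e.g. `pip install python-telegram-bot==13.15 googletrans==4.0.0rc1`.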
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e4_s6789_v3
|
KingKazma
| 2023-07-17T00:51:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:51:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s6789_v3
|
KingKazma
| 2023-07-17T00:30:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:30:13Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
acasany/rare-puppers
|
acasany
| 2023-07-17T00:27:57Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-17T00:27:47Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8876404762268066
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### husky

#### samoyed

#### shiba inu

|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e9_s6789_v3
|
KingKazma
| 2023-07-17T00:24:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:24:10Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e0_s6789_v3
|
KingKazma
| 2023-07-17T00:23:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-17T00:23:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|