modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-10 12:31:44) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 552 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-10 12:31:31) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
edbeeching/decision-transformer-gym-halfcheetah-medium-replay
|
edbeeching
| 2022-06-29T19:21:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"feature-extraction",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control",
"arxiv:2106.01345",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-03-16T08:20:08Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment.
The following normalization coefficients are required to use this model (a short example of applying them follows the lists):
mean = [-0.12880704, 0.37381196, -0.14995988, -0.23479079, -0.28412786, -0.13096535, -0.20157982, -0.06517727, 3.4768248, -0.02785066, -0.01503525, 0.07697279, 0.01266712, 0.0273253, 0.02316425, 0.01043872, -0.01583941]
std = [0.17019016, 1.2844249, 0.33442774, 0.36727592, 0.26092398, 0.4784107, 0.31814206 ,0.33552638, 2.0931616, 0.80374336, 1.9044334, 6.57321, 7.5728636, 5.0697494, 9.105554, 6.0856543, 7.253004, 5]
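Below is a minimal sketch (not the authors' evaluation script, which is linked next) of how such coefficients are typically applied: raw Gym observations are standardized with the mean and std above before being passed to the model.
```python
import numpy as np

# Illustrative helper only: `mean` and `std` are assumed to be NumPy arrays
# holding the coefficients listed above, and `obs` a raw HalfCheetah observation.
def normalize_observation(obs: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Standardize an observation before feeding it to the Decision Transformer."""
    return (obs - mean) / std
```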
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
edbeeching/decision-transformer-gym-hopper-medium-replay
|
edbeeching
| 2022-06-29T19:20:14Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"feature-extraction",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control",
"arxiv:2106.01345",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-03-16T08:20:43Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium-replay trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.2305138, -0.04371411, -0.44542956, -0.09370098, 0.09094488, 1.3694725, -0.19992675, -0.02286135, -0.5287045, -0.14465883, -0.19652697]
std = [0.17565121, 0.06369286, 0.34383234, 0.19566889, 0.5547985, 1.0510299, 1.1583077, 0.79631287, 1.4802359, 1.6540332, 5.108601]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
edbeeching/decision-transformer-gym-hopper-medium
|
edbeeching
| 2022-06-29T19:15:16Z | 34,485 | 6 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"feature-extraction",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control",
"arxiv:2106.01345",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-03-16T08:20:31Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.311279, -0.08469521, -0.5382719, -0.07201576, 0.04932366, 2.1066856, -0.15017354, 0.00878345, -0.2848186, -0.18540096, -0.28461286]
std = [0.17790751, 0.05444621, 0.21297139, 0.14530419, 0.6124444, 0.85174465, 1.4515252, 0.6751696, 1.536239, 1.6160746, 5.6072536 ]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
edbeeching/decision-transformer-gym-hopper-expert
|
edbeeching
| 2022-06-29T19:12:17Z | 566 | 18 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"feature-extraction",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control",
"arxiv:2106.01345",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2022-03-16T08:20:20Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on expert trajectories sampled from the Gym Hopper environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on expert trajectories sampled from the Gym Hopper environment.
The following normalization coefficients are required to use this model:
mean = [ 1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432, 0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673]
std = [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924, 1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027 ]
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
zhav1k/q-Taxi-v3
|
zhav1k
| 2022-06-29T18:56:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T18:55:53Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
nbroad/bigbird-base-health-fact
|
nbroad
| 2022-06-29T18:29:17Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"big_bird",
"text-classification",
"generated_from_trainer",
"en",
"dataset:health_fact",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-26T17:55:02Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- health_fact
model-index:
- name: bigbird-base-health-fact
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: health_fact
type: health_fact
split: test
metrics:
- name: F1
type: f1
value: 0.6694031411935434
- name: Accuracy
type: accuracy
value: 0.7948094079480941
- name: False Accuracy
type: accuracy
value: 0.8092783505154639
- name: Mixture Accuracy
type: accuracy
value: 0.4975124378109453
- name: True Accuracy
type: accuracy
value: 0.9148580968280468
- name: Unproven Accuracy
type: accuracy
value: 0.4
---
# bigbird-base-health-fact
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the health_fact dataset.
It achieves the following results on the VALIDATION set:
- Overall Accuracy: 0.8228995057660626
- Macro F1: 0.6979224830442152
- False Accuracy: 0.8289473684210527
- Mixture Accuracy: 0.47560975609756095
- True Accuracy: 0.9332273449920508
- Unproven Accuracy: 0.4634146341463415
It achieves the following results on the TEST set:
- Overall Accuracy: 0.7948094079480941
- Macro F1: 0.6694031411935434
- Mixture Accuracy: 0.4975124378109453
- False Accuracy: 0.8092783505154639
- True Accuracy: 0.9148580968280468
- Unproven Accuracy: 0.4
## Model description
Here is how you can use the model:
```python
import torch
from transformers import pipeline
claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him."
text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, “EEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, “If you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, “How dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, “Oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s “I left my mother and came to Seoul” and the mother’s “I won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (“My little girl ran away, scared of my mom’s eye” and “I screamed at her, ‘How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. 
She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … “your mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, “mom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, “who are you? !” “I don’t know you!! !” as if trying to make that real. I screamed at her, “How dare you come to my house and scare my daughter!” “GET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, “oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, ‘it’s because he loves me…’ my son. 
Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to “connect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another — we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the “notes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner “dates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce."
label = "false"
device = 0 if torch.cuda.is_available() else -1
pl = pipeline("text-classification", model="nbroad/bigbird-base-health-fact", device=device)
input_text = claim+pl.tokenizer.sep_token+text
print(len(pl.tokenizer(input_text).input_ids))
# 2303 (which is why bigbird is useful)
pl(input_text)
# [{'label': 'false', 'score': 0.3866822123527527}]
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
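As a rough reconstruction (not the exact training script behind this card, and treating the reported batch sizes as per-device values), the configuration above maps onto `transformers.TrainingArguments` roughly as follows:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bigbird-base-health-fact",
    learning_rate=1e-5,
    per_device_train_batch_size=8,   # reported train_batch_size
    per_device_eval_batch_size=32,   # reported eval_batch_size
    seed=18,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```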
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:|
| 0.5563 | 1.0 | 1226 | 0.5020 | 0.7949 | 0.6062 | 0.7926 | 0.4591 | 0.8986 | 0.2745 |
| 0.5048 | 2.0 | 2452 | 0.4969 | 0.8180 | 0.6846 | 0.8202 | 0.4342 | 0.9126 | 0.5714 |
| 0.3454 | 3.0 | 3678 | 0.5864 | 0.8130 | 0.6874 | 0.8114 | 0.4557 | 0.9154 | 0.5672 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
JHart96/finetuning-sentiment-model-3000-samples
|
JHart96
| 2022-06-29T18:20:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-29T18:10:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8627450980392156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3300
- Accuracy: 0.86
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ashraq/movielens_user_model_cos_32
|
ashraq
| 2022-06-29T18:07:51Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-24T19:16:33Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
kakashi210/autotrain-tweet-sentiment-classifier-1055036381
|
kakashi210
| 2022-06-29T17:54:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"unk",
"dataset:kakashi210/autotrain-data-tweet-sentiment-classifier",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-29T17:45:44Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- kakashi210/autotrain-data-tweet-sentiment-classifier
co2_eq_emissions: 17.43982800509071
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1055036381
- CO2 Emissions (in grams): 17.43982800509071
## Validation Metrics
- Loss: 0.6177256107330322
- Accuracy: 0.7306006137658921
- Macro F1: 0.719534854339415
- Micro F1: 0.730600613765892
- Weighted F1: 0.7302204676842725
- Macro Precision: 0.714938066281146
- Micro Precision: 0.7306006137658921
- Weighted Precision: 0.7316651970219867
- Macro Recall: 0.7258484087500343
- Micro Recall: 0.7306006137658921
- Weighted Recall: 0.7306006137658921
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kakashi210/autotrain-tweet-sentiment-classifier-1055036381
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("kakashi210/autotrain-tweet-sentiment-classifier-1055036381", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kakashi210/autotrain-tweet-sentiment-classifier-1055036381", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc
|
BeardedJohn
| 2022-06-29T16:59:54Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-29T11:44:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Validation Loss: 0.0006
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 705, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1740 | 0.0013 | 0 |
| 0.0024 | 0.0007 | 1 |
| 0.0015 | 0.0006 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Abonia/finetuning-sentiment-model-3000-samples
|
Abonia
| 2022-06-29T15:27:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-29T15:12:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877076411960133
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.8767
- F1: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ydshieh/clip-vit-base-patch32
|
ydshieh
| 2022-06-29T14:47:32Z | 15 | 1 |
transformers
|
[
"transformers",
"tf",
"clip",
"zero-shot-image-classification",
"summarization",
"en",
"dataset:scientific_papers",
"arxiv:2007.14062",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
license: apache-2.0
datasets:
- scientific_papers
tags:
- summarization
model-index:
- name: google/bigbird-pegasus-large-pubmed
results:
- task:
type: summarization
name: Summarization
dataset:
name: scientific_papers
type: scientific_papers
config: pubmed
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 40.8966
verified: true
- name: ROUGE-2
type: rouge
value: 18.1161
verified: true
- name: ROUGE-L
type: rouge
value: 26.1743
verified: true
- name: ROUGE-LSUM
type: rouge
value: 34.2773
verified: true
- name: loss
type: loss
value: 2.1707184314727783
verified: true
- name: meteor
type: meteor
value: 0.3513
verified: true
- name: gen_len
type: gen_len
value: 221.2531
verified: true
- task:
type: summarization
name: Summarization
dataset:
name: scientific_papers
type: scientific_papers
config: arxiv
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 40.3815
verified: true
- name: ROUGE-2
type: rouge
value: 14.374
verified: true
- name: ROUGE-L
type: rouge
value: 23.4773
verified: true
- name: ROUGE-LSUM
type: rouge
value: 33.772
verified: true
- name: loss
type: loss
value: 3.235051393508911
verified: true
- name: gen_len
type: gen_len
value: 186.2003
verified: true
---
# BigBirdPegasus model (large)
BigBird is a sparse-attention-based transformer that extends Transformer-based models such as BERT to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")
# by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")
# decoder attention type can't be changed & will be "original_full"
# you can change `attention_type` (encoder only) to full attention like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", block_size=16, num_random_blocks=2)
text = "Replace me by any text you'd like."
inputs = tokenizer(text, return_tensors='pt')
prediction = model.generate(**inputs)
prediction = tokenizer.batch_decode(prediction)
```
## Training Procedure
This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on **pubmed dataset** from [scientific_papers](https://huggingface.co/datasets/scientific_papers).
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
FabianWillner/bert-base-uncased-finetuned-squad
|
FabianWillner
| 2022-06-29T14:46:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-29T09:16:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0626 | 1.0 | 5533 | 1.0308 |
| 0.8157 | 2.0 | 11066 | 1.0106 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
freedomking/prompt-uie-base
|
freedomking
| 2022-06-29T14:46:15Z | 4 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-06-29T14:28:56Z |
## Introduction
Universal Information Extraction
More detail:
https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/uie
|
igpaub/q-FrozenLake-v1-4x4
|
igpaub
| 2022-06-29T14:29:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T13:12:43Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4
results:
- metrics:
- type: mean_reward
value: 0.78 +/- 0.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-4x4", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Salvatore/bert-finetuned-mutation-recognition-1
|
Salvatore
| 2022-06-29T13:59:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-29T09:40:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-mutation-recognition-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mutation-recognition-1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0380
- Proteinmutation F1: 0.8631
- Dnamutation F1: 0.7522
- Snp F1: 1.0
- Precision: 0.8061
- Recall: 0.8386
- F1: 0.8221
- Accuracy: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Dnamutation F1 | Snp F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:--------------:|:------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 259 | 0.0273 | 0.8072 | 0.5762 | 0.975 | 0.6685 | 0.7580 | 0.7104 | 0.9924 |
| 0.0597 | 2.0 | 518 | 0.0260 | 0.8148 | 0.6864 | 0.9873 | 0.7363 | 0.8004 | 0.7670 | 0.9936 |
| 0.0597 | 3.0 | 777 | 0.0338 | 0.8252 | 0.7221 | 1.0 | 0.7857 | 0.7941 | 0.7899 | 0.9935 |
| 0.0046 | 4.0 | 1036 | 0.0299 | 0.8707 | 0.7214 | 0.9873 | 0.7773 | 0.8450 | 0.8098 | 0.9941 |
| 0.0046 | 5.0 | 1295 | 0.0353 | 0.9035 | 0.7364 | 0.9873 | 0.8130 | 0.8493 | 0.8307 | 0.9941 |
| 0.0014 | 6.0 | 1554 | 0.0361 | 0.8941 | 0.7391 | 0.9873 | 0.8093 | 0.8471 | 0.8278 | 0.9941 |
| 0.0014 | 7.0 | 1813 | 0.0367 | 0.8957 | 0.7249 | 1.0 | 0.8090 | 0.8365 | 0.8225 | 0.9940 |
| 0.0004 | 8.0 | 2072 | 0.0381 | 0.8714 | 0.7578 | 1.0 | 0.8266 | 0.8301 | 0.8284 | 0.9940 |
| 0.0004 | 9.0 | 2331 | 0.0380 | 0.8732 | 0.7550 | 1.0 | 0.8148 | 0.8408 | 0.8276 | 0.9942 |
| 0.0002 | 10.0 | 2590 | 0.0380 | 0.8631 | 0.7522 | 1.0 | 0.8061 | 0.8386 | 0.8221 | 0.9942 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
robingeibel/bigbird-base-finetuned-big_patent
|
robingeibel
| 2022-06-29T12:35:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"fill-mask",
"generated_from_trainer",
"dataset:big_patent",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-27T07:03:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: bigbird-base-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/bigbird-base-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-base-finetuned-big_patent) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0686
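A minimal usage sketch (not part of the original card), assuming the checkpoint is loaded through the standard fill-mask pipeline; the example sentence is made up:
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned BigBird checkpoint.
fill = pipeline("fill-mask", model="robingeibel/bigbird-base-finetuned-big_patent")
masked = f"The invention relates to a {fill.tokenizer.mask_token} for cooling electronic components."
print(fill(masked))  # top candidate tokens with scores
```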
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.1432 | 1.0 | 154482 | 1.0686 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
igpaub/q-FrozenLake-v1-4x4-noSlippery
|
igpaub
| 2022-06-29T12:17:50Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T12:17:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
gguichard/q-Taxi-v3
|
gguichard
| 2022-06-29T09:26:50Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T09:26:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.48 +/- 2.77
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="gguichard/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
squirro/distilroberta-base-squad_v2
|
squirro
| 2022-06-29T08:53:58Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"onnx",
"roberta",
"question-answering",
"generated_from_trainer",
"en",
"dataset:squad_v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-07T10:00:04Z |
---
license: apache-2.0
language: en
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-squad_v2
results:
- task:
name: Question Answering
type: question-answering
dataset:
type: squad_v2 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: The Stanford Question Answering Dataset
args: en
metrics:
- type: eval_exact
value: 65.2405
- type: eval_f1
value: 68.6265
- type: eval_HasAns_exact
value: 67.5776
- type: eval_HasAns_f1
value: 74.3594
- type: eval_NoAns_exact
value: 62.91
- type: eval_NoAns_f1
value: 62.91
---
# distilroberta-base-squad_v2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
## Model description
This model is fine-tuned on the extractive question answering task -- The Stanford Question Answering Dataset -- [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/).
For convenience, this model is prepared to be used with the frameworks `PyTorch`, `TensorFlow` and `ONNX`.
## Intended uses & limitations
This model can handle mismatched question-context pairs. Make sure to specify `handle_impossible_answer=True` when using `QuestionAnsweringPipeline`.
__Example usage:__
```python
>>> from transformers import AutoModelForQuestionAnswering, AutoTokenizer, QuestionAnsweringPipeline
>>> model = AutoModelForQuestionAnswering.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> tokenizer = AutoTokenizer.from_pretrained("squirro/distilroberta-base-squad_v2")
>>> qa_model = QuestionAnsweringPipeline(model, tokenizer)
>>> qa_model(
>>> question="What's your name?",
>>> context="My name is Clara and I live in Berkeley.",
>>> handle_impossible_answer=True # important!
>>> )
{'score': 0.9498472809791565, 'start': 11, 'end': 16, 'answer': 'Clara'}
```
## Training and evaluation data
Training and evaluation was done on [SQuAD2.0](https://huggingface.co/datasets/squad_v2).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Metric | Value |
|:-------------------------|-------------:|
| epoch | 3 |
| eval_HasAns_exact | 67.5776 |
| eval_HasAns_f1 | 74.3594 |
| eval_HasAns_total | 5928 |
| eval_NoAns_exact | 62.91 |
| eval_NoAns_f1 | 62.91 |
| eval_NoAns_total | 5945 |
| eval_best_exact | 65.2489 |
| eval_best_exact_thresh | 0 |
| eval_best_f1 | 68.6349 |
| eval_best_f1_thresh | 0 |
| eval_exact | 65.2405 |
| eval_f1 | 68.6265 |
| eval_samples | 12165 |
| eval_total | 11873 |
| train_loss | 1.40336 |
| train_runtime | 1365.28 |
| train_samples | 131823 |
| train_samples_per_second | 289.662 |
| train_steps_per_second | 0.567 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
---
# About Us
<img src="https://squirro.com/wp-content/themes/squirro/img/squirro_logo.svg" alt="Squirro Logo" width="250"/>
Squirro marries data from any source with your intent, and your context to intelligently augment decision-making - right when you need it!
An Insight Engine at its core, Squirro works with global organizations, primarily in financial services, public sector, professional services, and manufacturing, among others. Customers include Bank of England, European Central Bank (ECB), Deutsche Bundesbank, Standard Chartered, Henkel, Armacell, Candriam, and many other world-leading firms.
Founded in 2012, Squirro is currently present in Zürich, London, New York, and Singapore. Further information about AI-driven business insights can be found at http://squirro.com.
## Social media profiles:
- Redefining AI Podcast (Spotify): https://open.spotify.com/show/6NPLcv9EyaD2DcNT8v89Kb
- Redefining AI Podcast (Apple Podcasts): https://podcasts.apple.com/us/podcast/redefining-ai/id1613934397
- Squirro LinkedIn: https://www.linkedin.com/company/squirroag
- Squirro Academy LinkedIn: https://www.linkedin.com/showcase/the-squirro-academy
- Twitter: https://twitter.com/Squirro
- Facebook: https://www.facebook.com/squirro
- Instagram: https://www.instagram.com/squirro/
|
RuiqianLi/wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun
|
RuiqianLi
| 2022-06-29T08:44:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:uob_singlish",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-29T04:45:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-960h-lv60-self-4-gram_fine-tune_real_29_Jun
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the uob_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2895
- Wer: 0.4583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1283 | 1.82 | 20 | 1.5236 | 0.5764 |
| 1.3015 | 3.64 | 40 | 1.2956 | 0.4931 |
| 0.9918 | 5.45 | 60 | 1.3087 | 0.5347 |
| 0.849 | 7.27 | 80 | 1.2914 | 0.5139 |
| 0.6191 | 9.09 | 100 | 1.2895 | 0.4583 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cwkeam/m-ctc-t-large-lid
|
cwkeam
| 2022-06-29T08:11:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mctct",
"speech",
"en",
"dataset:librispeech_asr",
"dataset:common_voice",
"arxiv:2111.00161",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-06-29T08:08:36Z |
---
language: en
datasets:
- librispeech_asr
- common_voice
tags:
- speech
license: apache-2.0
---
# M-CTC-T
Massively multilingual speech recognizer from Meta AI. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal.

The original Flashlight code, model checkpoints, and Colab notebook can be found at https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl .
## Citation
[Paper](https://arxiv.org/abs/2111.00161)
Authors: Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert
```
@article{lugosch2021pseudo,
title={Pseudo-Labeling for Massively Multilingual Speech Recognition},
author={Lugosch, Loren and Likhomanenko, Tatiana and Synnaeve, Gabriel and Collobert, Ronan},
journal={ICASSP},
year={2022}
}
```
Additional thanks to [Chan Woo Kim](https://huggingface.co/cwkeam) and [Patrick von Platen](https://huggingface.co/patrickvonplaten) for porting the model from Flashlight to PyTorch.
# Training method
 TO-DO: replace with the training diagram from paper
For more information on how the model was trained, please take a look at the [official paper](https://arxiv.org/abs/2111.00161).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor
model = MCTCTForCTC.from_pretrained("speechbrain/mctct-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/mctct-large")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features
# retrieve logits
logits = model(input_features).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
Results for Common Voice, averaged over all languages:
*Character error rate (CER)*:
| Valid | Test |
|-------|------|
| 21.4 | 23.3 |
|
prithivida/bert-for-patents-64d
|
prithivida
| 2022-06-29T07:47:23Z | 41 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"masked-lm",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-31T06:40:35Z |
---
language:
- en
tags:
- masked-lm
- pytorch
pipeline-tag: "fill-mask"
mask-token: "[MASK]"
widget:
- text: "The present [MASK] provides a torque sensor that is small and highly rigid and for which high production efficiency is possible."
- text: "The present invention relates to [MASK] accessories and pertains particularly to a brake light unit for bicycles."
- text: "The present invention discloses a space-bound-free [MASK] and its coordinate determining circuit for determining a coordinate of a stylus pen."
- text: "The illuminated [MASK] includes a substantially translucent canopy supported by a plurality of ribs pivotally swingable towards and away from a shaft."
license: apache-2.0
metrics:
- perplexity
---
# Motivation
This model is based on anferico/bert-for-patents - a BERT<sub>LARGE</sub> model (See next section for details below). By default, the pre-trained model's output embeddings with size 768 (base-models) or with size 1024 (large-models). However, when you store Millions of embeddings, this can require quite a lot of memory/storage. So have reduced the embedding dimension to 64 i.e 1/16th of 1024 using Principle Component Analysis (PCA) and it still gives a comparable performance. Yes! PCA gives better performance than NMF. Note: This process neither improves the runtime, nor the memory requirement for running the model. It only reduces the needed space to store embeddings, for example, for semantic search using vector databases.
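The reduction step itself can be sketched as follows (illustrative only, not the exact pipeline used to build this checkpoint): fit PCA on a sample of the original 1024-dimensional embeddings and project them down to 64 dimensions.
```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: in practice these would be 1024-d embeddings produced by the
# original bert-for-patents model on a representative patent corpus.
embeddings = np.random.randn(10_000, 1024)

pca = PCA(n_components=64)
reduced = pca.fit_transform(embeddings)          # shape: (10_000, 64)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```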
# BERT for Patents
BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the [blog post](https://cloud.google.com/blog/products/ai-machine-learning/how-ai-improves-patent-analysis), [white paper](https://services.google.com/fh/files/blogs/bert_for_patents_white_paper.pdf) and [GitHub page](https://github.com/google/patents-public-data/blob/master/models/BERT%20for%20Patents.md) containing the original TensorFlow checkpoint.
---
### Projects using this model (or variants of it):
- [Patents4IPPC](https://github.com/ec-jrc/Patents4IPPC) (carried out by [Pi School](https://picampus-school.com/) and commissioned by the [Joint Research Centre (JRC)](https://ec.europa.eu/jrc/en) of the European Commission)
|
coolzhao/xlm-roberta-base-finetuned-panx-de
|
coolzhao
| 2022-06-29T07:14:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-29T07:01:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8600306626540231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- F1: 0.8600
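A minimal usage sketch (not part of the original card), assuming the checkpoint is used through the standard token-classification pipeline for German NER:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="coolzhao/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```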
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2525 | 1.0 | 525 | 0.1673 | 0.8294 |
| 0.1298 | 2.0 | 1050 | 0.1381 | 0.8510 |
| 0.0839 | 3.0 | 1575 | 0.1356 | 0.8600 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
hellennamulinda/agric-eng-lug
|
hellennamulinda
| 2022-06-29T06:40:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain",
"unk",
"dataset:hellennamulinda/autotrain-data-agric-eng-lug",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-23T13:50:37Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- hellennamulinda/autotrain-data-agric-eng-lug
co2_eq_emissions: 0.04087910671538076
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1026034854
- CO2 Emissions (in grams): 0.04087910671538076
## Validation Metrics
- Loss: 1.0871405601501465
- Rouge1: 55.8225
- Rouge2: 34.1547
- RougeL: 54.4274
- RougeLsum: 54.408
- Gen Len: 23.178
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/hellennamulinda/autotrain-agric-eng-lug-1026034854
```
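Alternatively, the model can be run locally with the 🤗 `transformers` pipeline; a minimal sketch (the example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="hellennamulinda/agric-eng-lug")
result = translator("The maize crop needs more rainfall this season.")
print(result[0]["translation_text"])
```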
|
iiShreya/q-FrozenLake-v1-4x4-noSlippery
|
iiShreya
| 2022-06-29T05:28:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T05:28:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="iiShreya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
RodrigoGuerra/bert-base-spanish-wwm-uncased-finetuned-clinical
|
RodrigoGuerra
| 2022-06-29T05:26:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-29T04:04:21Z |
---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-spanish-wwm-uncased-finetuned-clinical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-uncased-finetuned-clinical
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7962
- F1: 0.1081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 1.1202 | 1.0 | 2007 | 1.0018 | 0.0062 |
| 1.0153 | 2.0 | 4014 | 0.9376 | 0.0166 |
| 0.9779 | 3.0 | 6021 | 0.9026 | 0.0342 |
| 0.9598 | 4.0 | 8028 | 0.8879 | 0.0337 |
| 0.9454 | 5.0 | 10035 | 0.8699 | 0.0598 |
| 0.9334 | 6.0 | 12042 | 0.8546 | 0.0682 |
| 0.9263 | 7.0 | 14049 | 0.8533 | 0.0551 |
| 0.9279 | 8.0 | 16056 | 0.8538 | 0.0715 |
| 0.9184 | 9.0 | 18063 | 0.8512 | 0.0652 |
| 0.9151 | 10.0 | 20070 | 0.8313 | 0.0789 |
| 0.9092 | 11.0 | 22077 | 0.8299 | 0.0838 |
| 0.9083 | 12.0 | 24084 | 0.8331 | 0.0718 |
| 0.9057 | 13.0 | 26091 | 0.8319 | 0.0719 |
| 0.9018 | 14.0 | 28098 | 0.8133 | 0.0969 |
| 0.9068 | 15.0 | 30105 | 0.8234 | 0.0816 |
| 0.9034 | 16.0 | 32112 | 0.8151 | 0.0899 |
| 0.9008 | 17.0 | 34119 | 0.8145 | 0.0967 |
| 0.8977 | 18.0 | 36126 | 0.8168 | 0.0891 |
| 0.898 | 19.0 | 38133 | 0.8167 | 0.0818 |
| 0.8956 | 20.0 | 40140 | 0.8076 | 0.1030 |
| 0.8983 | 21.0 | 42147 | 0.8129 | 0.0867 |
| 0.896 | 22.0 | 44154 | 0.8118 | 0.0892 |
| 0.8962 | 23.0 | 46161 | 0.8066 | 0.1017 |
| 0.8917 | 24.0 | 48168 | 0.8154 | 0.0908 |
| 0.8923 | 25.0 | 50175 | 0.8154 | 0.0897 |
| 0.8976 | 26.0 | 52182 | 0.8089 | 0.0910 |
| 0.8926 | 27.0 | 54189 | 0.8069 | 0.0947 |
| 0.8911 | 28.0 | 56196 | 0.8170 | 0.0882 |
| 0.8901 | 29.0 | 58203 | 0.7991 | 0.1112 |
| 0.8934 | 30.0 | 60210 | 0.7996 | 0.1112 |
| 0.8903 | 31.0 | 62217 | 0.8049 | 0.0950 |
| 0.8924 | 32.0 | 64224 | 0.8116 | 0.0951 |
| 0.8887 | 33.0 | 66231 | 0.7982 | 0.1075 |
| 0.8922 | 34.0 | 68238 | 0.8013 | 0.1025 |
| 0.8871 | 35.0 | 70245 | 0.8064 | 0.0979 |
| 0.8913 | 36.0 | 72252 | 0.8108 | 0.0909 |
| 0.8924 | 37.0 | 74259 | 0.8081 | 0.0889 |
| 0.8848 | 38.0 | 76266 | 0.7923 | 0.1228 |
| 0.8892 | 39.0 | 78273 | 0.8025 | 0.0959 |
| 0.8886 | 40.0 | 80280 | 0.7954 | 0.1148 |
| 0.8938 | 41.0 | 82287 | 0.8017 | 0.1058 |
| 0.8897 | 42.0 | 84294 | 0.7946 | 0.1146 |
| 0.8906 | 43.0 | 86301 | 0.7983 | 0.1102 |
| 0.889 | 44.0 | 88308 | 0.8068 | 0.0950 |
| 0.8872 | 45.0 | 90315 | 0.7999 | 0.1089 |
| 0.8902 | 46.0 | 92322 | 0.7992 | 0.0999 |
| 0.8912 | 47.0 | 94329 | 0.7981 | 0.1048 |
| 0.886 | 48.0 | 96336 | 0.8024 | 0.0991 |
| 0.8848 | 49.0 | 98343 | 0.8026 | 0.0984 |
| 0.8866 | 50.0 | 100350 | 0.7965 | 0.1135 |
| 0.8848 | 51.0 | 102357 | 0.8054 | 0.0926 |
| 0.8863 | 52.0 | 104364 | 0.8068 | 0.0917 |
| 0.8866 | 53.0 | 106371 | 0.7993 | 0.0964 |
| 0.8823 | 54.0 | 108378 | 0.7929 | 0.1126 |
| 0.8911 | 55.0 | 110385 | 0.7938 | 0.1132 |
| 0.8911 | 56.0 | 112392 | 0.7932 | 0.1144 |
| 0.8866 | 57.0 | 114399 | 0.8018 | 0.0957 |
| 0.8841 | 58.0 | 116406 | 0.7976 | 0.1015 |
| 0.8874 | 59.0 | 118413 | 0.8035 | 0.0966 |
| 0.887 | 60.0 | 120420 | 0.7954 | 0.1112 |
| 0.888 | 61.0 | 122427 | 0.7927 | 0.1164 |
| 0.8845 | 62.0 | 124434 | 0.7982 | 0.1012 |
| 0.8848 | 63.0 | 126441 | 0.7978 | 0.1034 |
| 0.8857 | 64.0 | 128448 | 0.8036 | 0.0969 |
| 0.8827 | 65.0 | 130455 | 0.7958 | 0.1036 |
| 0.8878 | 66.0 | 132462 | 0.7983 | 0.1030 |
| 0.885 | 67.0 | 134469 | 0.7956 | 0.1055 |
| 0.8859 | 68.0 | 136476 | 0.7964 | 0.1058 |
| 0.8872 | 69.0 | 138483 | 0.7989 | 0.1005 |
| 0.8841 | 70.0 | 140490 | 0.7949 | 0.1138 |
| 0.8846 | 71.0 | 142497 | 0.7960 | 0.1062 |
| 0.8867 | 72.0 | 144504 | 0.7965 | 0.1058 |
| 0.8856 | 73.0 | 146511 | 0.7980 | 0.1007 |
| 0.8852 | 74.0 | 148518 | 0.7971 | 0.1012 |
| 0.8841 | 75.0 | 150525 | 0.7975 | 0.1049 |
| 0.8865 | 76.0 | 152532 | 0.7981 | 0.1010 |
| 0.8887 | 77.0 | 154539 | 0.7945 | 0.1095 |
| 0.8853 | 78.0 | 156546 | 0.7965 | 0.1053 |
| 0.8843 | 79.0 | 158553 | 0.7966 | 0.1062 |
| 0.8858 | 80.0 | 160560 | 0.7962 | 0.1081 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-mlm-test
|
domenicrosati
| 2022-06-29T05:17:09Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-28T23:53:45Z |
---
license: mit
tags:
- fill-mask
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-mlm-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-mlm-test
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2792
- Accuracy: 0.4766
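A minimal fill-mask sketch (the example sentence is illustrative; `[MASK]` is the mask token used by DeBERTa tokenizers):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="domenicrosati/deberta-mlm-test")
for prediction in fill_mask("The patient was given a low [MASK] of the drug."):
    print(prediction["token_str"], prediction["score"])
```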
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.4466 | 1.0 | 2067 | 4.1217 | 0.3847 |
| 3.9191 | 2.0 | 4134 | 3.6562 | 0.4298 |
| 3.6397 | 3.0 | 6201 | 3.4417 | 0.4550 |
| 3.522 | 4.0 | 8268 | 3.3239 | 0.4692 |
| 3.4504 | 5.0 | 10335 | 3.2792 | 0.4766 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Abhinandan/Atari
|
Abhinandan
| 2022-06-29T04:59:38Z | 5 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-29T04:38:24Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 14.50 +/- 12.34
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Abhinandan -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Abhinandan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Shikenrua/distilbert-base-uncased-finetuned-emotion
|
Shikenrua
| 2022-06-29T04:46:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-17T05:16:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
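A minimal usage sketch (the example sentence is illustrative; depending on how the labels were saved, the output may show generic `LABEL_0` ... `LABEL_5` names rather than emotion names):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Shikenrua/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled that my paper finally got accepted!"))
```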
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cwkeam/m-ctc-t-large-sequence-lid
|
cwkeam
| 2022-06-29T04:31:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mctct",
"text-classification",
"speech",
"en",
"dataset:librispeech_asr",
"dataset:common_voice",
"arxiv:2111.00161",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T15:10:55Z |
---
language: en
datasets:
- librispeech_asr
- common_voice
tags:
- speech
license: apache-2.0
---
# M-CTC-T
Massively multilingual speech recognizer from Meta AI. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16Khz audio signal.

The original Flashlight code, model checkpoints, and Colab notebook can be found at https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl .
## Citation
[Paper](https://arxiv.org/abs/2111.00161)
Authors: Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert
```
@article{lugosch2021pseudo,
title={Pseudo-Labeling for Massively Multilingual Speech Recognition},
author={Lugosch, Loren and Likhomanenko, Tatiana and Synnaeve, Gabriel and Collobert, Ronan},
journal={ICASSP},
year={2022}
}
```
Additional thanks to [Chan Woo Kim](https://huggingface.co/cwkeam) and [Patrick von Platen](https://huggingface.co/patrickvonplaten) for porting the model from Flashlight to PyTorch.
# Training method
 TO-DO: replace with the training diagram from paper
For more information on how the model was trained, please take a look at the [official paper](https://arxiv.org/abs/2111.00161).
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import MCTCTForCTC, MCTCTProcessor
model = MCTCTForCTC.from_pretrained("speechbrain/mctct-large")
processor = MCTCTProcessor.from_pretrained("speechbrain/mctct-large")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features
# retrieve logits
logits = model(input_features).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
Results for Common Voice, averaged over all languages:
*Character error rate (CER)*:
| Valid | Test |
|-------|------|
| 21.4 | 23.3 |
|
KyanChen/BuildingExtraction
|
KyanChen
| 2022-06-29T02:13:33Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-06-29T01:34:01Z |
# STTNet
Paper: Building Extraction from Remote Sensing Images with Sparse Token Transformers
1. Prepare Data
Prepare data for the training, validation, and test phases. All images have a resolution of $512 \times 512$. Please refer to the **Data** directory.
For larger images, you can split them and their labels into patches using **Tools/CutImgSegWithLabel.py**.
2. Get Data List
Please refer to **Tools/GetTrainValTestCSV.py** to get the train, val, and test csv files.
3. Get Image Info
Please refer to **Tools/GetImgMeanStd.py** to get the mean value and standard deviation of all image pixels in the training set.
4. Modify Model Info
Please modify the model information if you want, or keep the default configuration.
5. Run to Train
Train the model in **Main.py**.
6. [Optional] Run to Test
Test the model with a checkpoint in **Test.py**.
We have provided pretrained models for the INRIA and WHU datasets. The .pt models are in the **Pretrain** folder.
If you have any questions, please refer to [our paper](https://www.mdpi.com/2072-4292/13/21/4441) or contact us by email.
```
@Article{rs13214441,
AUTHOR = {Chen, Keyan and Zou, Zhengxia and Shi, Zhenwei},
TITLE = {Building Extraction from Remote Sensing Images with Sparse Token Transformers},
JOURNAL = {Remote Sensing},
VOLUME = {13},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {4441},
URL = {https://www.mdpi.com/2072-4292/13/21/4441},
ISSN = {2072-4292},
DOI = {10.3390/rs13214441}
}
```
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3
|
gary109
| 2022-06-29T01:22:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-28T14:58:21Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v3
This model is a fine-tuned version of [gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2](https://huggingface.co/gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5265
- Wer: 0.2256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2546 | 1.0 | 280 | 0.6004 | 0.2796 |
| 0.2325 | 2.0 | 560 | 0.6337 | 0.2729 |
| 0.2185 | 3.0 | 840 | 0.5546 | 0.2299 |
| 0.1988 | 4.0 | 1120 | 0.5265 | 0.2256 |
| 0.1755 | 5.0 | 1400 | 0.5577 | 0.2212 |
| 0.1474 | 6.0 | 1680 | 0.6353 | 0.2241 |
| 0.1498 | 7.0 | 1960 | 0.5758 | 0.2086 |
| 0.1252 | 8.0 | 2240 | 0.5738 | 0.2052 |
| 0.1174 | 9.0 | 2520 | 0.5994 | 0.2048 |
| 0.1035 | 10.0 | 2800 | 0.5988 | 0.2038 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
|
gary109
| 2022-06-29T01:00:45Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-28T05:51:25Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Wer: 0.1211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2609 | 1.0 | 280 | 0.2313 | 0.1376 |
| 0.2297 | 2.0 | 560 | 0.2240 | 0.1397 |
| 0.1951 | 3.0 | 840 | 0.2280 | 0.1361 |
| 0.1816 | 4.0 | 1120 | 0.2215 | 0.1282 |
| 0.1634 | 5.0 | 1400 | 0.2180 | 0.1240 |
| 0.1338 | 6.0 | 1680 | 0.2226 | 0.1241 |
| 0.1411 | 7.0 | 1960 | 0.2143 | 0.1211 |
| 0.1143 | 8.0 | 2240 | 0.2181 | 0.1174 |
| 0.1127 | 9.0 | 2520 | 0.2215 | 0.1167 |
| 0.105 | 10.0 | 2800 | 0.2196 | 0.1160 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
workRL/q-Taxi-v3
|
workRL
| 2022-06-28T23:49:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T23:49:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="workRL/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Aalaa/opt-125m-wikitext2
|
Aalaa
| 2022-06-28T22:39:40Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-28T21:52:26Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-wikitext2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3409
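A minimal text-generation sketch (the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Aalaa/opt-125m-wikitext2")
out = generator(
    "The history of the city begins with",
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
)
print(out[0]["generated_text"])
```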
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4123 | 1.0 | 2370 | 3.3621 |
| 3.2096 | 2.0 | 4740 | 3.3452 |
| 3.0822 | 3.0 | 7110 | 3.3409 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
13hannes11/master_thesis_models
|
13hannes11
| 2022-06-28T21:14:01Z | 0 | 0 | null |
[
"tensorboard",
"focus-prediction",
"microscopy",
"pytorch",
"license:mit",
"region:us"
] | null | 2022-03-08T16:31:24Z |
---
name: "K-POP"
license: "mit"
metrics:
- MAE
- PLCC
- SRCC
- R2
tags:
- focus-prediction
- microscopy
- pytorch
---
# K-POP: Predicting Distance to Focal Plane for Kato-Katz Prepared Microscopy Slides Using Deep Learning
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a><a href="https://pytorchlightning.ai/">
<img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
## Description
This repository contains the models and training pipeline for my master thesis. The main repository is hosted on [GitHub](https://github.com/13hannes11/master_thesis_code).
The project structure is based on the template by [ashleve](https://github.com/ashleve/lightning-hydra-template).
The metadata is stored in `data/focus150/`. The relevant files are `test_metadata.csv`, `train_metadata.csv` and `validation_metadata.csv`. The image data (150 x 150 px images) is not published together with this repository; therefore, training runs are not possible without it. The layout of the metadata files is as follows:
```csv
,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name
0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma
1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma
...
```
## How to run
Train model with chosen experiment configuration from `configs/experiment/`
```bash
python train.py experiment=focusResNet_150
```
Train with hyperparameter search from `configs/hparams_search/`
```bash
python train.py -m hparams_search=focusResNetMSE_150
```
You can override any parameter from command line like this
```bash
python train.py trainer.max_epochs=20 datamodule.batch_size=64
```
## Jupyter notebooks
Figures and other evaluation code was run in Jupyter notebooks. These are available at `notebooks/`
|
syndi-models/article-title-generator
|
syndi-models
| 2022-06-28T20:08:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-09T18:49:29Z |
---
license: mit
---
## Article Title Generator
The model is based on the T5 language model and trained using a large collection of Medium articles.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModel.from_pretrained("czearing/article-title-generator")
```
## License
MIT
|
czearing/article-title-generator
|
czearing
| 2022-06-28T20:08:16Z | 1,175 | 21 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-28T19:44:19Z |
---
license: mit
---
## Article Title Generator
The model is based on the T5 language model and trained using a large collection of Medium articles.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModel.from_pretrained("czearing/article-title-generator")
```
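Note that `AutoModel` only loads the encoder-decoder backbone; to actually generate titles you need the seq2seq LM head. A hedged sketch (input formatting and generation settings are assumptions, since the card does not document them):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("czearing/article-title-generator")

article = (
    "Transfer learning lets you reuse a model trained on one task as the "
    "starting point for a related task, which is especially useful when "
    "labeled data is scarce."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```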
## License
MIT
|
YushiUeda/callhome_adapt_simu
|
YushiUeda
| 2022-06-28T19:33:39Z | 3 | 0 |
espnet
|
[
"espnet",
"audio",
"diarization",
"dataset:callhome",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-06-28T19:32:41Z |
---
tags:
- espnet
- audio
- diarization
language: noinfo
datasets:
- callhome
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `YushiUeda/callhome_adapt_simu`
This model was trained by YushiUeda using callhome recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 0cabe65afd362122e77b04e2e967986a91de0fd8
pip install -e .
cd egs2/callhome/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/callhome_adapt_simu
```
## DIAR config
<details><summary>expand</summary>
```
config: conf/tuning/train_diar_eda_adapt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/diar_train_diar_eda_adapt_simu
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 43777
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
- - train
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/diar_train_diar_eda_5_raw/latest.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/simu/data/swb_sre_tr_ns1n2n3n4_beta2n2n5n9_100000/wav.scp
- speech
- sound
- - dump/raw/simu/data/swb_sre_tr_ns1n2n3n4_beta2n2n5n9_100000/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/swb_sre_cv_ns1n2n3n4_beta2n2n5n9_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/swb_sre_cv_ns1n2n3n4_beta2n2n5n9_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: null
scheduler_conf: {}
num_spk: 4
init: null
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: specaug
specaug_conf:
apply_time_warp: false
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 4
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.1
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf:
win_length: 1024
hop_length: 512
attractor: rnn
attractor_conf:
unit: 256
layer: 1
dropout: 0.0
attractor_grad: true
required:
- output_dir
version: '202204'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
b3ck1/gpt-neo-125M-finetuned-beer-recipes
|
b3ck1
| 2022-06-28T19:03:17Z | 14 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- custom
widget:
- text: "style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "Pilsener"
- text: "style: IPA\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "IPA"
- text: "style: Scottish Ale\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "Scottish Ale"
inference:
parameters:
do_sample: true
top_k: 10
top_p: 0.99
max_length: 500
---
# GPT-Neo 125M finetuned with beer recipes
## Model Description
GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture https://huggingface.co/EleutherAI/gpt-neo-125M.
It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.
## Training data
This model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following
styles of beer:
* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> output = generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
type: Grain
amount: 6.5
hops:
- name: Saaz
alpha: 3.5
use: Boil
time: 60
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 30
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 10
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 0
amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
amount: 0.11
min_temperature: 12
max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
step_time: 60
miscs: []
```
### See this model in action
This model was used to build https://beerai.net.
|
facebook/regnet-x-002
|
facebook
| 2022-06-28T17:54:23Z | 142 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-15T19:34:23Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
DeepPavlov/distilrubert-small-cased-conversational
|
DeepPavlov
| 2022-06-28T17:19:09Z | 28,705 | 3 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"arxiv:1910.01108",
"endpoints_compatible",
"region:us"
] | null | 2022-06-28T17:15:00Z |
---
language:
- ru
---
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2-layer, 768-hidden, 12-heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between tokens labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
The model was trained for about 80 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in the inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency).
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in the [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models). Results can also be found in Tables 1 & 2 of the [paper](https://arxiv.org/abs/2205.02340), along with performance benchmarks and training details.
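The card does not include a usage snippet; loading the model with 🤗 Transformers should look roughly as follows (the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "DeepPavlov/distilrubert-small-cased-conversational"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("привет, как дела?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, 768)
```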
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
|
ranesh/qw
|
ranesh
| 2022-06-28T17:17:47Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-28T17:17:47Z |
---
license: bigscience-bloom-rail-1.0
---
|
mariastull/dqn-SpaceInvadersNoFrameSkip-v4
|
mariastull
| 2022-06-28T16:55:20Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T16:54:53Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 3.00 +/- 4.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mariastull -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mariastull
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
crumb/gpt2-regular-large
|
crumb
| 2022-06-28T16:35:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-13T14:08:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt-regular-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-regular-test
i was stupid and all the newline tokens are replaced with [/n], so be wary if you're using the demo on this page: that just means a new line
```python
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("crumb/gpt2-regular-large")
tokenizer = AutoTokenizer.from_pretrained("gpt2-large", use_fast=True)
prompt = """(Episode begins with Mordecai and Rigby watching TV)
Mordecai: Dude, what are you doing? I think I'm gonna lose my mind.
Rigby:"""
prompt=prompt.replace("\n","[/n]")
tokenz = tokenizer(prompt,return_tensors='pt')['input_ids']
length = 250  # maximum length of the generated output; pick whatever you need
output = model.generate(
tokenz,
max_length=length,
num_return_sequences=1,
top_p=.92,
temperature=.65,
do_sample=True,
top_k=125,
early_stopping=True,
pad_token_id=tokenizer.eos_token_id
)
output = tokenizer.decode(output[0]).replace("[/n]","\n")
print(output)
```
This model is a fine-tuned version of gpt2-large on the entirety of Regular Show. It achieves the following results on the evaluation set (The Power, Death Punchies, Do Me a Solid):
- Loss: 1.6383
## Intended uses & limitations
Same as gpt2-large
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1844 | 1.0 | 7633 | 1.6383 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
fxtentacle/wav2vec2-xls-r-1b-tevr
|
fxtentacle
| 2022-06-28T16:22:18Z | 27 | 14 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"de",
"dataset:common_voice",
"arxiv:2206.12693",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-06-02T09:09:53Z |
---
language: de
datasets:
- common_voice
inference: false
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec 2.0 XLS-R 1B + TEVR tokens + 5-gram LM by Hajo Nils Krabbenhöft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 3.6433399042523233
- name: Test CER
type: cer
value: 1.5398893560981173
---
## Overview
This folder contains a fully trained German speech recognition pipeline
consisting of an acoustic model using the new wav2vec 2.0 XLS-R 1B **TEVR** architecture
and a 5-gram KenLM language model.
For an explanation of the TEVR enhancements and their motivation, please see our paper:
[TEVR: Improving Speech Recognition by Token Entropy Variance Reduction](https://arxiv.org/abs/2206.12693).
[](https://paperswithcode.com/sota/speech-recognition-on-common-voice-german?p=tevr-improving-speech-recognition-by-token)
This pipeline scores a very competitive (as of June 2022) **word error rate of 3.64%** on CommonVoice German.
The character error rate was 1.54%.
## Citation
If you use this ASR pipeline for research, please cite:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.12693,
doi = {10.48550/ARXIV.2206.12693},
url = {https://arxiv.org/abs/2206.12693},
author = {Krabbenhöft, Hajo Nils and Barth, Erhardt},
keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, F.2.1; I.2.6; I.2.7},
title = {TEVR: Improving Speech Recognition by Token Entropy Variance Reduction},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## TEVR Tokenizer Creation / Testing
See https://huggingface.co/fxtentacle/tevr-token-entropy-predictor-de for:
- our trained ByT5 model used to calculate the entropies in the paper
- a Jupyter Notebook to generate a TEVR Tokenizer from a text corpus
- a Jupyter Notebook to generate the illustration image in the paper
## Evaluation
To evaluate this pipeline yourself and/or on your own data, see the `HF Eval Script.ipynb` Jupyter Notebook
or use the following python script:
```python
!pip install --quiet --root-user-action=ignore --upgrade pip
!pip install --quiet --root-user-action=ignore "datasets>=1.18.3" "transformers==4.11.3" librosa jiwer huggingface_hub
!pip install --quiet --root-user-action=ignore https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
!pip install --quiet --root-user-action=ignore --upgrade transformers
!pip install --quiet --root-user-action=ignore torch_audiomentations audiomentations
```
```python
from datasets import load_dataset, Audio, load_metric
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM
import torchaudio.transforms as T
import torch
import unicodedata
import numpy as np
import re
# load testing dataset
testing_dataset = load_dataset("common_voice", "de", split="test")
# replace invisible characters with space
allchars = list(set([c for t in testing_dataset['sentence'] for c in list(t)]))
map_to_space = [c for c in allchars if unicodedata.category(c)[0] in 'PSZ' and c not in 'ʻ-']
replacements = ''.maketrans(''.join(map_to_space), ''.join(' ' for i in range(len(map_to_space))), '\'ʻ')
def text_fix(text):
# change ß to ss
text = text.replace('ß','ss')
# convert dash to space and remove double-space
text = text.replace('-',' ').replace(' ',' ').replace(' ',' ')
# make lowercase
text = text.lower()
# remap all invisible characters to space
text = text.translate(replacements).strip()
# for easier comparison to Zimmermeister, replace unrepresentable characters with ?
text = re.sub("[âşěýňעảנźțãòàǔł̇æồאắîשðșęūāñë生בøúıśžçćńřğ]+","?",text)
# remove multiple spaces (again)
text = ' '.join([w for w in text.split(' ') if w != ''])
return text
# load model
model = AutoModelForCTC.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
model.to('cuda')
# load processor
class HajoProcessor(Wav2Vec2ProcessorWithLM):
@staticmethod
def get_missing_alphabet_tokens(decoder, tokenizer):
return []
processor = HajoProcessor.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
# this function will be called for each WAV file
def predict_single_audio(batch, image=False):
audio = batch['audio']['array']
# resample, if needed
if batch['audio']['sampling_rate'] != 16000:
audio = T.Resample(orig_freq=batch['audio']['sampling_rate'], new_freq=16000)(torch.from_numpy(audio)).numpy()
# normalize
audio = (audio - audio.mean()) / np.sqrt(audio.var() + 1e-7)
# ask HF processor to prepare audio for GPU eval
input_values = processor(audio, return_tensors="pt", sampling_rate=16_000).input_values
# call model on GPU
with torch.no_grad():
logits = model(input_values.to('cuda')).logits.cpu().numpy()[0]
# ask HF processor to decode logits
decoded = processor.decode(logits, beam_width=500)
# return as dictionary
return { 'groundtruth': text_fix(batch['sentence']), 'prediction': decoded.text }
# process all audio files
all_predictions = testing_dataset.map(predict_single_audio, remove_columns=testing_dataset.column_names)
# print results
print('WER', load_metric("wer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
print('CER', load_metric("cer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
```
WER 3.6433399042523233 %
CER 1.5398893560981173 %
|
Parkerboys211/IDK
|
Parkerboys211
| 2022-06-28T15:45:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-28T15:44:55Z |
---
license: isc
---
can someone teach me how to do this pls help me
|
facebook/regnet-x-120
|
facebook
| 2022-06-28T15:40:50Z | 68 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:26:36Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
Shanny/dbgbert-finetuned-squad
|
Shanny
| 2022-06-28T15:28:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-27T09:04:37Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: dbgbert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbgbert-finetuned-squad
This model was trained from scratch on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
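For reference, here is a minimal `TrainingArguments` sketch that reproduces the values above; the Adam betas and epsilon are the library defaults, `output_dir` is a placeholder, and dataset preparation plus the `Trainer` call are omitted:
```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="dbgbert-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # Native AMP
)
```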
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Salvatore/bert-finetuned-ner
|
Salvatore
| 2022-06-28T15:24:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-16T09:09:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0997
- Proteinmutation F1: 0.1309
- Snp F1: 0.1953
- Dnamutation F1: 0.3778
- Precision: 0.2380
- Recall: 0.2416
- F1: 0.2398
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
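As a starting point, the checkpoint can be loaded with the standard token-classification pipeline. A minimal sketch follows; the example sentence is only an illustration, and the label set comes from the checkpoint's config:
```python
from transformers import pipeline

# Minimal usage sketch; the input sentence is only an illustration
ner = pipeline(
    "token-classification",
    model="Salvatore/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("The BRCA1 c.68_69delAG mutation was detected in the patient."))
```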
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Snp F1 | Dnamutation F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------:|:--------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 324 | 0.0533 | 0.0396 | 0.2830 | 0.4667 | 0.2334 | 0.3221 | 0.2707 | 0.9788 |
| 0.1072 | 2.0 | 648 | 0.0437 | 0.6065 | 0.4906 | 0.5009 | 0.4802 | 0.6348 | 0.5468 | 0.9868 |
| 0.1072 | 3.0 | 972 | 0.0592 | 0.1379 | 0.2485 | 0.2005 | 0.1639 | 0.2228 | 0.1889 | 0.9731 |
| 0.0573 | 4.0 | 1296 | 0.0722 | 0.0749 | 0.2530 | 0.4692 | 0.2705 | 0.2959 | 0.2826 | 0.9749 |
| 0.0431 | 5.0 | 1620 | 0.0766 | 0.1574 | 0.1847 | 0.2540 | 0.1766 | 0.2285 | 0.1992 | 0.9723 |
| 0.0431 | 6.0 | 1944 | 0.0805 | 0.1099 | 0.2202 | 0.2383 | 0.1657 | 0.2097 | 0.1851 | 0.9715 |
| 0.0396 | 7.0 | 2268 | 0.0886 | 0.1337 | 0.2138 | 0.4318 | 0.2683 | 0.2678 | 0.2680 | 0.9724 |
| 0.0354 | 8.0 | 2592 | 0.0927 | 0.1535 | 0.2113 | 0.3769 | 0.2505 | 0.2528 | 0.2516 | 0.9714 |
| 0.0354 | 9.0 | 2916 | 0.0978 | 0.1011 | 0.2540 | 0.3812 | 0.2495 | 0.2528 | 0.2512 | 0.9705 |
| 0.0312 | 10.0 | 3240 | 0.0997 | 0.1309 | 0.1953 | 0.3778 | 0.2380 | 0.2416 | 0.2398 | 0.9703 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE-4
|
Willy
| 2022-06-28T14:44:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-20T07:09:26Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE-4
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7825
- Accuracy: 0.4931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7005 | 1.0 | 9 | 0.6977 | 0.5069 |
| 0.65 | 2.0 | 18 | 0.7035 | 0.4861 |
| 0.6144 | 3.0 | 27 | 0.7189 | 0.4722 |
| 0.5898 | 4.0 | 36 | 0.7859 | 0.4861 |
| 0.561 | 5.0 | 45 | 0.7825 | 0.4931 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mmdjiji/bert-chinese-idioms
|
mmdjiji
| 2022-06-28T14:12:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-28T02:02:33Z |
---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms).
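A minimal fill-mask sketch, assuming the standard `pipeline` API; the masked idiom below is only an illustration:
```python
from transformers import pipeline

# Minimal usage sketch; the masked idiom is only an illustration
unmasker = pipeline("fill-mask", model="mmdjiji/bert-chinese-idioms")
print(unmasker("守株待[MASK]"))
```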
|
moonzi/distilbert-base-uncased-finetuned-imdb
|
moonzi
| 2022-06-28T13:46:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-28T13:37:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6898 | 1.0 | 157 | 2.5423 |
| 2.5746 | 2.0 | 314 | 2.4453 |
| 2.5548 | 3.0 | 471 | 2.4528 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
|
gary109
| 2022-06-28T11:49:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T14:51:07Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0163
- Wer: 0.6622
## Model description
More information needed
## Intended uses & limitations
More information needed
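To illustrate intended use, here is a minimal automatic-speech-recognition sketch; `audio.wav` is a placeholder for a local 16 kHz mono recording:
```python
from transformers import pipeline

# Minimal usage sketch; "audio.wav" is a placeholder path
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5",
)
print(asr("audio.wav"))
```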
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8867 | 1.0 | 376 | 1.0382 | 0.6821 |
| 0.8861 | 2.0 | 752 | 1.0260 | 0.6686 |
| 0.8682 | 3.0 | 1128 | 1.0358 | 0.6604 |
| 0.8662 | 4.0 | 1504 | 1.0234 | 0.6665 |
| 0.8463 | 5.0 | 1880 | 1.0333 | 0.6666 |
| 0.8573 | 6.0 | 2256 | 1.0163 | 0.6622 |
| 0.8628 | 7.0 | 2632 | 1.0209 | 0.6551 |
| 0.8493 | 8.0 | 3008 | 1.0525 | 0.6582 |
| 0.8371 | 9.0 | 3384 | 1.0409 | 0.6515 |
| 0.8229 | 10.0 | 3760 | 1.0597 | 0.6523 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
twieland/MIX3_ja-en_helsinki
|
twieland
| 2022-06-28T11:46:58Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-22T00:54:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX3_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX3_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4832
## Model description
More information needed
## Intended uses & limitations
More information needed
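A minimal Japanese-to-English translation sketch using the standard `pipeline` API; the input sentence is only an illustration:
```python
from transformers import pipeline

# Minimal usage sketch; the input sentence is only an illustration
translator = pipeline("translation", model="twieland/MIX3_ja-en_helsinki")
print(translator("猫はソファの上で寝ています。"))
```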
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 2.8699 | 0.01 | 5000 | 2.3465 |
| 2.6168 | 0.02 | 10000 | 2.2205 |
| 2.5083 | 0.03 | 15000 | 2.2382 |
| 2.4359 | 0.04 | 20000 | 2.1670 |
| 2.3821 | 0.06 | 25000 | 2.1122 |
| 2.3358 | 0.07 | 30000 | 2.0902 |
| 2.3045 | 0.08 | 35000 | 2.0461 |
| 2.2782 | 0.09 | 40000 | 2.0290 |
| 2.2481 | 0.1 | 45000 | 1.9910 |
| 2.2267 | 0.11 | 50000 | 2.0059 |
| 2.2056 | 0.12 | 55000 | 1.9858 |
| 2.1903 | 0.13 | 60000 | 1.9725 |
| 2.173 | 0.15 | 65000 | 1.9797 |
| 2.154 | 0.16 | 70000 | 1.9654 |
| 2.1429 | 0.17 | 75000 | 1.9567 |
| 2.1304 | 0.18 | 80000 | 1.9348 |
| 2.1232 | 0.19 | 85000 | 1.9361 |
| 2.116 | 0.2 | 90000 | 1.9277 |
| 2.1016 | 0.21 | 95000 | 1.9193 |
| 2.0984 | 0.22 | 100000 | 1.9064 |
| 2.0797 | 0.24 | 105000 | 1.9177 |
| 2.0767 | 0.25 | 110000 | 1.8975 |
| 2.0642 | 0.26 | 115000 | 1.8782 |
| 2.0595 | 0.27 | 120000 | 1.9012 |
| 2.0533 | 0.28 | 125000 | 1.8977 |
| 2.044 | 0.29 | 130000 | 1.8984 |
| 2.0374 | 0.3 | 135000 | 1.9221 |
| 2.0305 | 0.31 | 140000 | 1.9243 |
| 2.02 | 0.32 | 145000 | 1.8773 |
| 2.0195 | 0.34 | 150000 | 1.8676 |
| 2.0151 | 0.35 | 155000 | 1.8637 |
| 2.0065 | 0.36 | 160000 | 1.8556 |
| 2.0037 | 0.37 | 165000 | 1.8399 |
| 1.9963 | 0.38 | 170000 | 1.8452 |
| 1.9878 | 0.39 | 175000 | 1.8644 |
| 1.9871 | 0.4 | 180000 | 1.8576 |
| 1.9779 | 0.41 | 185000 | 1.8509 |
| 1.9721 | 0.43 | 190000 | 1.8405 |
| 1.9724 | 0.44 | 195000 | 1.8594 |
| 1.9685 | 0.45 | 200000 | 1.8540 |
| 1.9634 | 0.46 | 205000 | 1.8694 |
| 1.9583 | 0.47 | 210000 | 1.8591 |
| 1.9557 | 0.48 | 215000 | 1.8539 |
| 1.9494 | 0.49 | 220000 | 1.8673 |
| 1.9484 | 0.5 | 225000 | 1.8021 |
| 1.9395 | 0.52 | 230000 | 1.8309 |
| 1.9384 | 0.53 | 235000 | 1.7933 |
| 1.937 | 0.54 | 240000 | 1.8199 |
| 1.9315 | 0.55 | 245000 | 1.8065 |
| 1.9276 | 0.56 | 250000 | 1.7857 |
| 1.9248 | 0.57 | 255000 | 1.8207 |
| 1.9195 | 0.58 | 260000 | 1.7898 |
| 1.9187 | 0.59 | 265000 | 1.8097 |
| 1.9138 | 0.6 | 270000 | 1.7909 |
| 1.9094 | 0.62 | 275000 | 1.7995 |
| 1.9098 | 0.63 | 280000 | 1.8165 |
| 1.9038 | 0.64 | 285000 | 1.8132 |
| 1.9034 | 0.65 | 290000 | 1.7951 |
| 1.899 | 0.66 | 295000 | 1.7880 |
| 1.8965 | 0.67 | 300000 | 1.7953 |
| 1.8941 | 0.68 | 305000 | 1.7986 |
| 1.8919 | 0.69 | 310000 | 1.7964 |
| 1.8875 | 0.71 | 315000 | 1.8041 |
| 1.884 | 0.72 | 320000 | 1.7764 |
| 1.8798 | 0.73 | 325000 | 1.8019 |
| 1.8801 | 0.74 | 330000 | 1.7790 |
| 1.8809 | 0.75 | 335000 | 1.7849 |
| 1.8736 | 0.76 | 340000 | 1.7800 |
| 1.8727 | 0.77 | 345000 | 1.7900 |
| 1.8722 | 0.78 | 350000 | 1.7727 |
| 1.8699 | 0.8 | 355000 | 1.7597 |
| 1.8672 | 0.81 | 360000 | 1.7824 |
| 1.8638 | 0.82 | 365000 | 1.7674 |
| 1.8609 | 0.83 | 370000 | 1.7715 |
| 1.8584 | 0.84 | 375000 | 1.7694 |
| 1.8568 | 0.85 | 380000 | 1.7776 |
| 1.8523 | 0.86 | 385000 | 1.7697 |
| 1.8584 | 0.87 | 390000 | 1.7436 |
| 1.8474 | 0.88 | 395000 | 1.7644 |
| 1.8492 | 0.9 | 400000 | 1.7732 |
| 1.8465 | 0.91 | 405000 | 1.7611 |
| 1.846 | 0.92 | 410000 | 1.7717 |
| 1.8431 | 0.93 | 415000 | 1.7514 |
| 1.8402 | 0.94 | 420000 | 1.7353 |
| 1.8398 | 0.95 | 425000 | 1.7720 |
| 1.8314 | 0.96 | 430000 | 1.7728 |
| 1.8322 | 0.97 | 435000 | 1.7491 |
| 1.8284 | 0.99 | 440000 | 1.7561 |
| 1.8301 | 1.0 | 445000 | 1.7499 |
| 1.8182 | 1.01 | 450000 | 1.7514 |
| 1.8111 | 1.02 | 455000 | 1.7596 |
| 1.8116 | 1.03 | 460000 | 1.7455 |
| 1.8098 | 1.04 | 465000 | 1.7495 |
| 1.809 | 1.05 | 470000 | 1.7446 |
| 1.8088 | 1.06 | 475000 | 1.7290 |
| 1.8127 | 1.08 | 480000 | 1.7453 |
| 1.8051 | 1.09 | 485000 | 1.7495 |
| 1.8026 | 1.1 | 490000 | 1.7453 |
| 1.8028 | 1.11 | 495000 | 1.7615 |
| 1.8046 | 1.12 | 500000 | 1.7491 |
| 1.8052 | 1.13 | 505000 | 1.7280 |
| 1.7997 | 1.14 | 510000 | 1.7482 |
| 1.7976 | 1.15 | 515000 | 1.7368 |
| 1.7981 | 1.16 | 520000 | 1.7354 |
| 1.7949 | 1.18 | 525000 | 1.7076 |
| 1.7943 | 1.19 | 530000 | 1.7020 |
| 1.7911 | 1.2 | 535000 | 1.7121 |
| 1.7909 | 1.21 | 540000 | 1.7170 |
| 1.7926 | 1.22 | 545000 | 1.7310 |
| 1.7856 | 1.23 | 550000 | 1.7218 |
| 1.7875 | 1.24 | 555000 | 1.7362 |
| 1.7801 | 1.25 | 560000 | 1.7484 |
| 1.7854 | 1.27 | 565000 | 1.7466 |
| 1.7799 | 1.28 | 570000 | 1.7248 |
| 1.7823 | 1.29 | 575000 | 1.7355 |
| 1.7765 | 1.3 | 580000 | 1.7188 |
| 1.7779 | 1.31 | 585000 | 1.6993 |
| 1.7751 | 1.32 | 590000 | 1.7154 |
| 1.7762 | 1.33 | 595000 | 1.7348 |
| 1.7725 | 1.34 | 600000 | 1.7272 |
| 1.7701 | 1.36 | 605000 | 1.7157 |
| 1.7644 | 1.37 | 610000 | 1.7161 |
| 1.7707 | 1.38 | 615000 | 1.6961 |
| 1.764 | 1.39 | 620000 | 1.6930 |
| 1.7639 | 1.4 | 625000 | 1.6927 |
| 1.7654 | 1.41 | 630000 | 1.6989 |
| 1.7623 | 1.42 | 635000 | 1.6892 |
| 1.7598 | 1.43 | 640000 | 1.6911 |
| 1.7575 | 1.44 | 645000 | 1.7199 |
| 1.7574 | 1.46 | 650000 | 1.6992 |
| 1.7526 | 1.47 | 655000 | 1.6981 |
| 1.7556 | 1.48 | 660000 | 1.6860 |
| 1.7558 | 1.49 | 665000 | 1.7099 |
| 1.7539 | 1.5 | 670000 | 1.6950 |
| 1.7454 | 1.51 | 675000 | 1.6999 |
| 1.748 | 1.52 | 680000 | 1.6871 |
| 1.7476 | 1.53 | 685000 | 1.6884 |
| 1.7493 | 1.55 | 690000 | 1.6984 |
| 1.745 | 1.56 | 695000 | 1.6999 |
| 1.7397 | 1.57 | 700000 | 1.7036 |
| 1.7429 | 1.58 | 705000 | 1.7223 |
| 1.7367 | 1.59 | 710000 | 1.7111 |
| 1.7403 | 1.6 | 715000 | 1.6691 |
| 1.7361 | 1.61 | 720000 | 1.6693 |
| 1.737 | 1.62 | 725000 | 1.6884 |
| 1.7347 | 1.63 | 730000 | 1.6641 |
| 1.7323 | 1.65 | 735000 | 1.6628 |
| 1.7329 | 1.66 | 740000 | 1.6759 |
| 1.7292 | 1.67 | 745000 | 1.6654 |
| 1.7275 | 1.68 | 750000 | 1.6738 |
| 1.7266 | 1.69 | 755000 | 1.6792 |
| 1.7259 | 1.7 | 760000 | 1.6752 |
| 1.7231 | 1.71 | 765000 | 1.6641 |
| 1.7238 | 1.72 | 770000 | 1.6676 |
| 1.7223 | 1.74 | 775000 | 1.6563 |
| 1.722 | 1.75 | 780000 | 1.6541 |
| 1.7195 | 1.76 | 785000 | 1.6560 |
| 1.7171 | 1.77 | 790000 | 1.6786 |
| 1.7187 | 1.78 | 795000 | 1.6434 |
| 1.7186 | 1.79 | 800000 | 1.6538 |
| 1.7115 | 1.8 | 805000 | 1.6535 |
| 1.7119 | 1.81 | 810000 | 1.6738 |
| 1.7106 | 1.83 | 815000 | 1.6597 |
| 1.7088 | 1.84 | 820000 | 1.6486 |
| 1.7079 | 1.85 | 825000 | 1.6576 |
| 1.7062 | 1.86 | 830000 | 1.6676 |
| 1.7084 | 1.87 | 835000 | 1.6449 |
| 1.7059 | 1.88 | 840000 | 1.6515 |
| 1.7057 | 1.89 | 845000 | 1.6609 |
| 1.7021 | 1.9 | 850000 | 1.6482 |
| 1.7005 | 1.91 | 855000 | 1.6653 |
| 1.6988 | 1.93 | 860000 | 1.6801 |
| 1.6964 | 1.94 | 865000 | 1.6830 |
| 1.6954 | 1.95 | 870000 | 1.6589 |
| 1.693 | 1.96 | 875000 | 1.6553 |
| 1.689 | 1.97 | 880000 | 1.6554 |
| 1.69 | 1.98 | 885000 | 1.6424 |
| 1.6893 | 1.99 | 890000 | 1.6628 |
| 1.6772 | 2.0 | 895000 | 1.6709 |
| 1.6703 | 2.02 | 900000 | 1.6627 |
| 1.6726 | 2.03 | 905000 | 1.6612 |
| 1.669 | 2.04 | 910000 | 1.6595 |
| 1.6696 | 2.05 | 915000 | 1.6427 |
| 1.6672 | 2.06 | 920000 | 1.6497 |
| 1.669 | 2.07 | 925000 | 1.6288 |
| 1.6675 | 2.08 | 930000 | 1.6443 |
| 1.6685 | 2.09 | 935000 | 1.6316 |
| 1.6671 | 2.11 | 940000 | 1.6451 |
| 1.6673 | 2.12 | 945000 | 1.6313 |
| 1.6649 | 2.13 | 950000 | 1.6363 |
| 1.6655 | 2.14 | 955000 | 1.6440 |
| 1.6637 | 2.15 | 960000 | 1.6238 |
| 1.6632 | 2.16 | 965000 | 1.6226 |
| 1.6599 | 2.17 | 970000 | 1.6171 |
| 1.6602 | 2.18 | 975000 | 1.6466 |
| 1.658 | 2.19 | 980000 | 1.6341 |
| 1.6571 | 2.21 | 985000 | 1.6500 |
| 1.6572 | 2.22 | 990000 | 1.6225 |
| 1.6572 | 2.23 | 995000 | 1.6296 |
| 1.6552 | 2.24 | 1000000 | 1.6437 |
| 1.6548 | 2.25 | 1005000 | 1.6162 |
| 1.6552 | 2.26 | 1010000 | 1.6223 |
| 1.6544 | 2.27 | 1015000 | 1.6355 |
| 1.6464 | 2.28 | 1020000 | 1.6250 |
| 1.652 | 2.3 | 1025000 | 1.6217 |
| 1.6481 | 2.31 | 1030000 | 1.6079 |
| 1.6466 | 2.32 | 1035000 | 1.6110 |
| 1.6462 | 2.33 | 1040000 | 1.6210 |
| 1.6448 | 2.34 | 1045000 | 1.5993 |
| 1.6461 | 2.35 | 1050000 | 1.6096 |
| 1.6396 | 2.36 | 1055000 | 1.6137 |
| 1.644 | 2.37 | 1060000 | 1.6189 |
| 1.6396 | 2.39 | 1065000 | 1.6211 |
| 1.639 | 2.4 | 1070000 | 1.6149 |
| 1.6358 | 2.41 | 1075000 | 1.6144 |
| 1.6356 | 2.42 | 1080000 | 1.6018 |
| 1.6364 | 2.43 | 1085000 | 1.5999 |
| 1.6352 | 2.44 | 1090000 | 1.6095 |
| 1.634 | 2.45 | 1095000 | 1.6114 |
| 1.6279 | 2.46 | 1100000 | 1.6156 |
| 1.6272 | 2.47 | 1105000 | 1.6124 |
| 1.6319 | 2.49 | 1110000 | 1.6046 |
| 1.6276 | 2.5 | 1115000 | 1.6152 |
| 1.6285 | 2.51 | 1120000 | 1.6129 |
| 1.6242 | 2.52 | 1125000 | 1.5984 |
| 1.6261 | 2.53 | 1130000 | 1.6116 |
| 1.623 | 2.54 | 1135000 | 1.6061 |
| 1.6203 | 2.55 | 1140000 | 1.6182 |
| 1.62 | 2.56 | 1145000 | 1.5887 |
| 1.6177 | 2.58 | 1150000 | 1.5731 |
| 1.6172 | 2.59 | 1155000 | 1.5990 |
| 1.6179 | 2.6 | 1160000 | 1.5965 |
| 1.6206 | 2.61 | 1165000 | 1.6000 |
| 1.6156 | 2.62 | 1170000 | 1.5873 |
| 1.6124 | 2.63 | 1175000 | 1.5899 |
| 1.613 | 2.64 | 1180000 | 1.5910 |
| 1.6134 | 2.65 | 1185000 | 1.6017 |
| 1.609 | 2.67 | 1190000 | 1.5822 |
| 1.6084 | 2.68 | 1195000 | 1.5906 |
| 1.6101 | 2.69 | 1200000 | 1.6218 |
| 1.6077 | 2.7 | 1205000 | 1.6149 |
| 1.6057 | 2.71 | 1210000 | 1.5994 |
| 1.6018 | 2.72 | 1215000 | 1.5839 |
| 1.6049 | 2.73 | 1220000 | 1.5864 |
| 1.6012 | 2.74 | 1225000 | 1.5994 |
| 1.6013 | 2.75 | 1230000 | 1.5821 |
| 1.5957 | 2.77 | 1235000 | 1.5964 |
| 1.5971 | 2.78 | 1240000 | 1.5897 |
| 1.5967 | 2.79 | 1245000 | 1.5774 |
| 1.5927 | 2.8 | 1250000 | 1.5861 |
| 1.5954 | 2.81 | 1255000 | 1.5789 |
| 1.5937 | 2.82 | 1260000 | 1.5739 |
| 1.5895 | 2.83 | 1265000 | 1.5701 |
| 1.5912 | 2.84 | 1270000 | 1.5622 |
| 1.5922 | 2.86 | 1275000 | 1.5730 |
| 1.5883 | 2.87 | 1280000 | 1.5775 |
| 1.5864 | 2.88 | 1285000 | 1.5726 |
| 1.5837 | 2.89 | 1290000 | 1.5679 |
| 1.5824 | 2.9 | 1295000 | 1.5683 |
| 1.5817 | 2.91 | 1300000 | 1.5508 |
| 1.5778 | 2.92 | 1305000 | 1.5620 |
| 1.5822 | 2.93 | 1310000 | 1.5556 |
| 1.5783 | 2.95 | 1315000 | 1.5693 |
| 1.5751 | 2.96 | 1320000 | 1.5781 |
| 1.5716 | 2.97 | 1325000 | 1.5655 |
| 1.5765 | 2.98 | 1330000 | 1.5528 |
| 1.5728 | 2.99 | 1335000 | 1.5748 |
| 1.5672 | 3.0 | 1340000 | 1.5597 |
| 1.5467 | 3.01 | 1345000 | 1.5461 |
| 1.547 | 3.02 | 1350000 | 1.5516 |
| 1.5462 | 3.03 | 1355000 | 1.5519 |
| 1.5464 | 3.05 | 1360000 | 1.5593 |
| 1.5457 | 3.06 | 1365000 | 1.5576 |
| 1.5441 | 3.07 | 1370000 | 1.5653 |
| 1.544 | 3.08 | 1375000 | 1.5662 |
| 1.5467 | 3.09 | 1380000 | 1.5611 |
| 1.5439 | 3.1 | 1385000 | 1.5635 |
| 1.5449 | 3.11 | 1390000 | 1.5467 |
| 1.5417 | 3.12 | 1395000 | 1.5495 |
| 1.5428 | 3.14 | 1400000 | 1.5552 |
| 1.5432 | 3.15 | 1405000 | 1.5347 |
| 1.5401 | 3.16 | 1410000 | 1.5394 |
| 1.5391 | 3.17 | 1415000 | 1.5497 |
| 1.539 | 3.18 | 1420000 | 1.5431 |
| 1.5368 | 3.19 | 1425000 | 1.5479 |
| 1.5365 | 3.2 | 1430000 | 1.5513 |
| 1.5327 | 3.21 | 1435000 | 1.5467 |
| 1.5337 | 3.23 | 1440000 | 1.5477 |
| 1.5317 | 3.24 | 1445000 | 1.5398 |
| 1.5315 | 3.25 | 1450000 | 1.5481 |
| 1.532 | 3.26 | 1455000 | 1.5385 |
| 1.5312 | 3.27 | 1460000 | 1.5520 |
| 1.5328 | 3.28 | 1465000 | 1.5423 |
| 1.5288 | 3.29 | 1470000 | 1.5489 |
| 1.5271 | 3.3 | 1475000 | 1.5395 |
| 1.5273 | 3.31 | 1480000 | 1.5335 |
| 1.5235 | 3.33 | 1485000 | 1.5381 |
| 1.5224 | 3.34 | 1490000 | 1.5289 |
| 1.5206 | 3.35 | 1495000 | 1.5331 |
| 1.5189 | 3.36 | 1500000 | 1.5343 |
| 1.5152 | 3.37 | 1505000 | 1.5246 |
| 1.5225 | 3.38 | 1510000 | 1.5280 |
| 1.5168 | 3.39 | 1515000 | 1.5315 |
| 1.5161 | 3.4 | 1520000 | 1.5284 |
| 1.5111 | 3.42 | 1525000 | 1.5278 |
| 1.5154 | 3.43 | 1530000 | 1.5148 |
| 1.515 | 3.44 | 1535000 | 1.5286 |
| 1.5117 | 3.45 | 1540000 | 1.5291 |
| 1.5099 | 3.46 | 1545000 | 1.5320 |
| 1.5097 | 3.47 | 1550000 | 1.5323 |
| 1.5075 | 3.48 | 1555000 | 1.5157 |
| 1.5059 | 3.49 | 1560000 | 1.5214 |
| 1.5011 | 3.51 | 1565000 | 1.5199 |
| 1.5074 | 3.52 | 1570000 | 1.5114 |
| 1.5033 | 3.53 | 1575000 | 1.5145 |
| 1.5009 | 3.54 | 1580000 | 1.5184 |
| 1.4994 | 3.55 | 1585000 | 1.5125 |
| 1.5041 | 3.56 | 1590000 | 1.5048 |
| 1.5002 | 3.57 | 1595000 | 1.5156 |
| 1.4967 | 3.58 | 1600000 | 1.5176 |
| 1.4923 | 3.59 | 1605000 | 1.5128 |
| 1.495 | 3.61 | 1610000 | 1.5188 |
| 1.4929 | 3.62 | 1615000 | 1.5149 |
| 1.4921 | 3.63 | 1620000 | 1.5097 |
| 1.4916 | 3.64 | 1625000 | 1.5161 |
| 1.4852 | 3.65 | 1630000 | 1.5134 |
| 1.4881 | 3.66 | 1635000 | 1.5101 |
| 1.4873 | 3.67 | 1640000 | 1.5027 |
| 1.4911 | 3.68 | 1645000 | 1.4968 |
| 1.488 | 3.7 | 1650000 | 1.4962 |
| 1.4842 | 3.71 | 1655000 | 1.5030 |
| 1.4829 | 3.72 | 1660000 | 1.5041 |
| 1.4816 | 3.73 | 1665000 | 1.5076 |
| 1.479 | 3.74 | 1670000 | 1.5029 |
| 1.4768 | 3.75 | 1675000 | 1.5053 |
| 1.4769 | 3.76 | 1680000 | 1.5026 |
| 1.4781 | 3.77 | 1685000 | 1.5016 |
| 1.4781 | 3.79 | 1690000 | 1.5034 |
| 1.4777 | 3.8 | 1695000 | 1.4976 |
| 1.4736 | 3.81 | 1700000 | 1.5002 |
| 1.4715 | 3.82 | 1705000 | 1.4995 |
| 1.4716 | 3.83 | 1710000 | 1.4996 |
| 1.4648 | 3.84 | 1715000 | 1.4952 |
| 1.4711 | 3.85 | 1720000 | 1.4934 |
| 1.4682 | 3.86 | 1725000 | 1.4965 |
| 1.4659 | 3.87 | 1730000 | 1.4932 |
| 1.4689 | 3.89 | 1735000 | 1.4920 |
| 1.4656 | 3.9 | 1740000 | 1.4910 |
| 1.4666 | 3.91 | 1745000 | 1.4893 |
| 1.4611 | 3.92 | 1750000 | 1.4888 |
| 1.4623 | 3.93 | 1755000 | 1.4898 |
| 1.4637 | 3.94 | 1760000 | 1.4909 |
| 1.4585 | 3.95 | 1765000 | 1.4858 |
| 1.4586 | 3.96 | 1770000 | 1.4847 |
| 1.4579 | 3.98 | 1775000 | 1.4841 |
| 1.458 | 3.99 | 1780000 | 1.4840 |
| 1.4572 | 4.0 | 1785000 | 1.4832 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
facebook/regnet-y-016
|
facebook
| 2022-06-28T11:38:42Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:34:34Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-016")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-016")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
muhammedshihebi/test_Model
|
muhammedshihebi
| 2022-06-28T10:32:10Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-28T10:31:50Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: test_Model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test_Model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fusing/glide-base
|
fusing
| 2022-06-28T10:27:26Z | 0 | 2 | null |
[
"arxiv:2112.10741",
"license:apache-2.0",
"region:us"
] | null | 2022-06-07T12:52:41Z |
---
license: apache-2.0
---
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
**Paper**: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741)
**Abstract**:
*Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing.*
## Usage
```python
# !pip install diffusers
import torch
from diffusers import DiffusionPipeline
import PIL.Image
model_id = "fusing/glide-base"
# load model and scheduler
pipeline = DiffusionPipeline.from_pretrained(model_id)
# run inference (text-conditioned denoising + upscaling)
img = pipeline("a crayon drawing of a corgi")
# process image to PIL
img = img.squeeze(0)
img = ((img + 1)*127.5).round().clamp(0, 255).to(torch.uint8).cpu().numpy()
image_pil = PIL.Image.fromarray(img)
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
|
Shanny/bert-finetuned-squad
|
Shanny
| 2022-06-28T10:07:41Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-26T21:27:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
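As an illustration of intended use, a minimal extractive question-answering sketch; the question and context are only illustrations:
```python
from transformers import pipeline

# Minimal usage sketch; question and context are only illustrations
qa = pipeline("question-answering", model="Shanny/bert-finetuned-squad")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```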
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SerdarHelli/ThyroidTumorClassificationModel
|
SerdarHelli
| 2022-06-28T09:52:22Z | 92 | 2 |
transformers
|
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"medicalimaging",
"thyroidtumor",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T10:52:45Z |
---
tags:
- medicalimaging
- thyroidtumor
metrics:
- accuracy
---
Thyroid nodules are among the most common endocrine carcinomas. Because it reveals nodules more readily and can distinguish benign from malignant nodules based on their pathological features, ultrasonography has become the most widely used modality for detecting and diagnosing thyroid cancer, compared with CT and MRI.
The purpose of this study is to classify thyroid tumors on ultrasound images into two categories:
- Malign (1)
- Benign (0)
This study was made using HF Transformers:
- [On Google Colab](https://colab.research.google.com/drive/1ueSq8Y_NmFr7NGdtS8FStI3d2HR-43LD?usp=sharing)
- [On Github](https://github.com/SerdarHelli/The-Classification-of-Thyroid-Tumors-on-UltraSound-Images-using-Deep-Learning-Methods)
- [Using Keras and Grad-CAM with multiple classes (Medium article)](https://serdarhelli.medium.com/the-basic-classification-of-thyroid-tumors-on-ultrasound-images-using-deep-learning-methods-46f812d859ea)
The Dataset:
[Colombia National University presented an open access database of thyroid ultrasound images.](http://cimalab.unal.edu.co/?lang=es&mod=program&id=5)
Ref : Pedraza, Lina & Vargas, Carlos & Narváez, Fabián & Durán, Oscar & Muñoz, Emma & Romero, Eduardo. (2015). An open access thyroid ultrasound-image Database. Progress in Biomedical Optics and Imaging — Proceedings of SPIE. 9287. 10.1117/12.2073532.
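A minimal inference sketch, assuming the standard image-classification `pipeline` API; `ultrasound.png` is a placeholder for a local ultrasound image:
```python
from transformers import pipeline

# Minimal usage sketch; "ultrasound.png" is a placeholder path
classifier = pipeline("image-classification", model="SerdarHelli/ThyroidTumorClassificationModel")
print(classifier("ultrasound.png"))
```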
|
rtorrero/my-first-model
|
rtorrero
| 2022-06-28T08:44:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-28T07:41:49Z |
This is just me playing around with Hugging Face :-)
|
vebie91/dqn-SpaceInvadersNoFrameskip-v4-1.2
|
vebie91
| 2022-06-28T04:33:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T04:33:19Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 563.00 +/- 159.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vebie91 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vebie91
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 6),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
mastak128/unit1
|
mastak128
| 2022-06-28T04:20:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T04:19:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 182.30 +/- 78.62
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code. A minimal loading sketch (the archive filename is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; adjust it to the archive actually stored in the repo
checkpoint = load_from_hub(repo_id="mastak128/unit1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
chandrasutrisnotjhong/marian-finetuned-kde4-en-to-fr
|
chandrasutrisnotjhong
| 2022-06-28T04:10:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-22T02:01:51Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83242564204547
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.8324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
JeremiahZ/reproduce-sup-roberta-base-avg
|
JeremiahZ
| 2022-06-28T04:10:25Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"generated_from_trainer",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-27T08:38:05Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
model-index:
- name: reproduce-sup-roberta-base-avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reproduce-sup-roberta-base-avg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jackieliu930/bart-large-cnn-samsum
|
jackieliu930
| 2022-06-28T03:46:12Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"sagemaker",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
datasets:
- samsum
model-index:
- name: bart-large-cnn-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
type: samsum
metrics:
- name: Validation ROGUE-1
type: rogue-1
value: 42.621
- name: Validation ROGUE-2
type: rogue-2
value: 21.9825
- name: Validation ROGUE-L
type: rogue-l
value: 33.034
- name: Test ROGUE-1
type: rogue-1
value: 41.3174
- name: Test ROGUE-2
type: rogue-2
value: 20.8716
- name: Test ROGUE-L
type: rogue-l
value: 32.1337
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 40.8911
verified: true
- name: ROUGE-2
type: rouge
value: 20.3551
verified: true
- name: ROUGE-L
type: rouge
value: 31.2696
verified: true
- name: ROUGE-LSUM
type: rouge
value: 37.9313
verified: true
- name: loss
type: loss
value: 1.4995627403259277
verified: true
- name: gen_len
type: gen_len
value: 60.2247
verified: true
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\
\ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face "
---
## `bart-large-cnn-samsum`
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
For more information look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters
{
"dataset_name": "samsum",
"do_eval": true,
"do_predict": true,
"do_train": true,
"fp16": true,
"learning_rate": 5e-05,
"model_name_or_path": "facebook/bart-large-cnn",
"num_train_epochs": 3,
"output_dir": "/opt/ml/model",
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
"sagemaker_container_log_level": 20,
"sagemaker_job_name": "huggingface-pytorch-training-2021-09-08-06-40-19-182",
"sagemaker_program": "run_summarization.py",
"sagemaker_region": "us-west-2",
"sagemaker_submit_directory": "s3://sagemaker-us-west-2-847380964353/huggingface-pytorch-training-2021-09-08-06-40-19-182/source/sourcedir.tar.gz",
"seed": 7
}
## Usage
from transformers import pipeline
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 42.059 |
| eval_rouge2 | 21.5509 |
| eval_rougeL | 32.4083 |
| eval_rougeLsum | 39.0015 |
| test_rouge1 | 40.8656 |
| test_rouge2 | 20.3517 |
| test_rougeL | 31.2268 |
| test_rougeLsum | 37.9301 |
|
jmwolf27/finetuning-sentiment-model-3000-samples
|
jmwolf27
| 2022-06-28T02:19:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-28T02:00:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3167
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
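To show the intended use, a minimal sentiment-classification sketch; the review text is only an illustration:
```python
from transformers import pipeline

# Minimal usage sketch; the review text is only an illustration
sentiment = pipeline(
    "text-classification",
    model="jmwolf27/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was surprisingly good, I enjoyed every minute of it."))
```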
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ajtamayoh/Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned
|
ajtamayoh
| 2022-06-28T02:13:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-28T01:50:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Negation_Scope_Detection_SFU_Spanish_NLP-CIC-WFU_DisTEMIST_fine_tuned
This model is a fine-tuned version of [ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT](https://huggingface.co/ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3219
- Precision: 0.7403
- Recall: 0.7571
- F1: 0.7486
- Accuracy: 0.9518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.2142 | 0.5227 | 0.6497 | 0.5793 | 0.9267 |
| No log | 2.0 | 144 | 0.2019 | 0.625 | 0.7062 | 0.6631 | 0.9420 |
| No log | 3.0 | 216 | 0.3089 | 0.6444 | 0.6554 | 0.6499 | 0.9432 |
| No log | 4.0 | 288 | 0.2376 | 0.6952 | 0.7345 | 0.7143 | 0.9478 |
| No log | 5.0 | 360 | 0.2876 | 0.7037 | 0.7514 | 0.7268 | 0.9538 |
| No log | 6.0 | 432 | 0.3077 | 0.7278 | 0.7401 | 0.7339 | 0.9534 |
| 0.091 | 7.0 | 504 | 0.3219 | 0.7403 | 0.7571 | 0.7486 | 0.9518 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Abdelmageed95/distilgpt2-finetuned-wikitext2
|
Abdelmageed95
| 2022-06-27T22:58:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-27T22:27:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
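A minimal text-generation sketch; the prompt and sampling settings are only illustrations:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal usage sketch; the prompt and sampling settings are only illustrations
tokenizer = AutoTokenizer.from_pretrained("Abdelmageed95/distilgpt2-finetuned-wikitext2")
model = AutoModelForCausalLM.from_pretrained("Abdelmageed95/distilgpt2-finetuned-wikitext2")
inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```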
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
matteopilotto/vit-base-patch16-224-in21k-snacks
|
matteopilotto
| 2022-06-27T22:19:35Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"dataset:Matthijs/snacks",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-05-14T16:23:18Z |
---
datasets:
- Matthijs/snacks
model-index:
- name: matteopilotto/vit-base-patch16-224-in21k-snacks
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Matthijs/snacks
type: Matthijs/snacks
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8928571428571429
verified: true
- name: Precision Macro
type: precision
value: 0.8990033704680036
verified: true
- name: Precision Micro
type: precision
value: 0.8928571428571429
verified: true
- name: Precision Weighted
type: precision
value: 0.8972398709051788
verified: true
- name: Recall Macro
type: recall
value: 0.8914608843537415
verified: true
- name: Recall Micro
type: recall
value: 0.8928571428571429
verified: true
- name: Recall Weighted
type: recall
value: 0.8928571428571429
verified: true
- name: F1 Macro
type: f1
value: 0.892544821273258
verified: true
- name: F1 Micro
type: f1
value: 0.8928571428571429
verified: true
- name: F1 Weighted
type: f1
value: 0.8924168605019522
verified: true
- name: loss
type: loss
value: 0.479541540145874
verified: true
---
# Vision Transformer fine-tuned on `Matthijs/snacks` dataset
Vision Transformer (ViT) model pre-trained on ImageNet-21k and fine-tuned on [**Matthijs/snacks**](https://huggingface.co/datasets/Matthijs/snacks) for 5 epochs using various data augmentation transformations from `torchvision`.
The model achieves a **94.97%** and **94.43%** accuracy on the validation and test set, respectively.
## Data augmentation pipeline
The code block below shows the various transformations applied during pre-processing to augment the original dataset.
The augmented images were generated on-the-fly with the `set_transform` method.
```python
from transformers import ViTFeatureExtractor
from torchvision.transforms import (
Compose,
Normalize,
Resize,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
ToTensor
)
checkpoint = 'google/vit-base-patch16-224-in21k'
feature_extractor = ViTFeatureExtractor.from_pretrained(checkpoint)
# transformations on the training set
train_aug_transforms = Compose([
RandomResizedCrop(size=feature_extractor.size),
RandomHorizontalFlip(p=0.5),
RandomAdjustSharpness(sharpness_factor=5, p=0.5),
ToTensor(),
Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])
# transformations on the validation/test set
valid_aug_transforms = Compose([
Resize(size=(feature_extractor.size, feature_extractor.size)),
ToTensor(),
Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])
```
|
huggingtweets/borisdayma
|
huggingtweets
| 2022-06-27T21:46:28Z | 67 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/borisdayma/1656366383066/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1152601773330370560/UhVRDMyp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Boris Dayma 🖍️</div>
<div style="text-align: center; font-size: 14px;">@borisdayma</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Boris Dayma 🖍️.
| Data | Boris Dayma 🖍️ |
| --- | --- |
| Tweets downloaded | 1371 |
| Retweets | 146 |
| Short tweets | 42 |
| Tweets kept | 1183 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tlbliehz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @borisdayma's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qs9dfef) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qs9dfef/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/borisdayma')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hidude562/discordgpt2mini
|
hidude562
| 2022-06-27T21:19:20Z | 0 | 1 | null |
[
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2022-05-05T09:56:43Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-discordgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-discordgpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.3032
- eval_runtime: 59.2004
- eval_samples_per_second: 274.542
- eval_steps_per_second: 34.324
- epoch: 0.26
- step: 25500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
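As a rough, illustrative sketch only (not generated by the Trainer), the hyperparameters listed above map approximately onto the following `TrainingArguments`; the output path is a placeholder:
```python
from transformers import TrainingArguments

# Approximate mapping of the reported hyperparameters; the Adam betas/epsilon
# and the linear scheduler listed above are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="gpt2-discordgpt2",   # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```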
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune
|
SEBIS
| 2022-06-27T20:56:39Z | 34 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"feature-extraction",
"summarization",
"arxiv:2104.02443",
"arxiv:1910.09700",
"arxiv:2105.09680",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
tags:
- summarization
widget:
- text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
---
# CodeTrans model for program synthesis
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)
## Model Details
- **Model Description:** This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.
- **Developed by:** [Ahmed Elnaggar](https://www.linkedin.com/in/prof-ahmed-elnaggar/),[Wei Ding](https://www.linkedin.com/in/wei-ding-92561270/)
- **Model Type:** Summarization
- **Language(s):** English
- **License:** Unknown
- **Resources for more information:**
- [Research Paper](https://arxiv.org/pdf/2104.02443.pdf)
- [GitHub Repo](https://github.com/agemagician/CodeTrans)
## How to Get Started With the Model
Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
device=0
)
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/small_model.ipynb).
## Uses
#### Direct Use
The model could be used to generate lisp inspired DSL code given a human language description of the task.
## Risks, Limitations and Biases
As detailed in this model’s [publication](https://arxiv.org/pdf/2104.02443.pdf), this model makes use of the data-set [One Billion Word Language Model Benchmark corpus](https://www.researchgate.net/publication/259239818_One_Billion_Word_Benchmark_for_Measuring_Progress_in_Statistical_Language_Modeling) in order to gather the self-supervised English data samples.
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
As such, it should be noted that language models pretrained on text corpora such as the One Billion Word Language Model Benchmark have been further examined: [Ngo, Helen & Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark), for example, report that models trained on this corpus
> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”
The aforementioned publication continues to warn that the One Billion Word Language Model Benchmark corpus
> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [the One Billion Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.
[Ngo, Helen & Araújo et al(2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark)
## Training
#### Training Data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
The authors additionally provide notes about the vocabulary used in the [associated paper](https://arxiv.org/pdf/2104.02443.pdf):
> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.
## Training procedure
#### Preprocessing
##### Transfer-learning Pretraining
The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
##### Fine-tuning
This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.
## Evaluation
#### Results
For the program synthesis task, the different models achieve the following results (in BLEU score):
Test results :
| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
- **Hardware Type:** Nvidia RTX 8000 GPUs
- **Hours used:** Unknown
- **Cloud Provider:** GCP TPU v2-8 and v3-8.
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Citation Information
```bibtex
@misc{elnaggar2021codetrans,
title={CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing},
author={Ahmed Elnaggar and Wei Ding and Llion Jones and Tom Gibbs and Tamas Feher and Christoph Angerer and Silvia Severini and Florian Matthes and Burkhard Rost},
year={2021},
eprint={2104.02443},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
huggingface/CodeBERTa-small-v1
|
huggingface
| 2022-06-27T15:48:41Z | 35,783 | 76 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"code",
"dataset:code_search_net",
"arxiv:1909.09436",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: code
thumbnail: https://cdn-media.huggingface.co/CodeBERTa/CodeBERTa.png
datasets:
- code_search_net
---
# CodeBERTa
CodeBERTa is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub.
Supported languages:
```shell
"go"
"java"
"javascript"
"php"
"python"
"ruby"
```
The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`.
Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% and 50% shorter than the same corpus tokenized by gpt2/roberta).
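One informal way to see this effect (not part of the original training setup) is to compare token counts against the stock `gpt2` tokenizer on a small code snippet; the exact ratio varies with the snippet:
```python
from transformers import AutoTokenizer

code_tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")

snippet = "def add(a, b):\n    return a + b"
# Fewer tokens for the code-specific tokenizer on typical source code
print(len(code_tokenizer.tokenize(snippet)), len(gpt2_tokenizer.tokenize(snippet)))
```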
The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full corpus (~2M functions) for 5 epochs.
### Tensorboard for this training ⤵️
[](https://tensorboard.dev/experiment/irRI7jXGQlqmlxXS0I07ew/#scalars)
## Quick start: masked language modeling prediction
```python
PHP_CODE = """
public static <mask> set(string $key, $value) {
if (!in_array($key, self::$allowedKeys)) {
throw new \InvalidArgumentException('Invalid key given');
}
self::$storedValues[$key] = $value;
}
""".lstrip()
```
### Does the model know how to complete simple PHP code?
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="huggingface/CodeBERTa-small-v1",
tokenizer="huggingface/CodeBERTa-small-v1"
)
fill_mask(PHP_CODE)
## Top 5 predictions:
#
' function' # prob 0.9999827146530151
'function' #
' void' #
' def' #
' final' #
```
### Yes! That was easy 🎉 What about some Python (warning: this is going to be meta)
```python
PYTHON_CODE = """
def pipeline(
task: str,
model: Optional = None,
framework: Optional[<mask>] = None,
**kwargs
) -> Pipeline:
pass
""".lstrip()
```
Results:
```python
'framework', 'Framework', ' framework', 'None', 'str'
```
> This program can auto-complete itself! 😱
### Just for fun, let's try to mask natural language (not code):
```python
fill_mask("My name is <mask>.")
# {'sequence': '<s> My name is undefined.</s>', 'score': 0.2548016905784607, 'token': 3353}
# {'sequence': '<s> My name is required.</s>', 'score': 0.07290805131196976, 'token': 2371}
# {'sequence': '<s> My name is null.</s>', 'score': 0.06323737651109695, 'token': 469}
# {'sequence': '<s> My name is name.</s>', 'score': 0.021919190883636475, 'token': 652}
# {'sequence': '<s> My name is disabled.</s>', 'score': 0.019681859761476517, 'token': 7434}
```
This (kind of) works because code contains comments (which contain natural language).
Of course, the most frequent name for a Computer scientist must be undefined 🤓.
## Downstream task: [programming language identification](https://huggingface.co/huggingface/CodeBERTa-language-id)
See the model card for **[`huggingface/CodeBERTa-language-id`](https://huggingface.co/huggingface/CodeBERTa-language-id)** 🤯.
<br>
## CodeSearchNet citation
<details>
```bibtex
@article{husain_codesearchnet_2019,
title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}},
shorttitle = {{CodeSearchNet} {Challenge}},
url = {http://arxiv.org/abs/1909.09436},
urldate = {2020-03-12},
journal = {arXiv:1909.09436 [cs, stat]},
author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
month = sep,
year = {2019},
note = {arXiv: 1909.09436},
}
```
</details>
|
microsoft/deberta-xlarge-mnli
|
microsoft
| 2022-06-27T15:47:33Z | 504,931 | 16 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"deberta-v1",
"deberta-mnli",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa XLarge model (750M parameters) fine-tuned on the MNLI task.
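The card does not include an inference snippet; a minimal sketch for NLI-style prediction (label names are read from the model config rather than hard-coded) could look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xlarge-mnli")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-xlarge-mnli")

# Premise / hypothesis pair, as in the widget example above
inputs = tokenizer("I love you.", "I like you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```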
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results for SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
```bibtex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
jcastanyo/dqn-SpaceInvadersNoFrameskip-v4
|
jcastanyo
| 2022-06-27T15:43:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T15:42:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 644.00 +/- 281.09
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcastanyo -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jcastanyo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
begmannen/ju
|
begmannen
| 2022-06-27T15:15:07Z | 0 | 0 | null |
[
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-06-27T15:15:06Z |
---
license: bsd-3-clause-clear
---
|
BukaByaka/opus-mt-ru-en-finetuned-ru-to-en
|
BukaByaka
| 2022-06-27T14:05:53Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-26T14:26:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-ru-en-finetuned-ru-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ru-en
metrics:
- name: Bleu
type: bleu
value: 30.4049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-ru-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4092
- Bleu: 30.4049
- Gen Len: 26.3911
## Model description
More information needed
## Intended uses & limitations
More information needed
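No usage example is provided yet; as an illustrative sketch (standard MarianMT-style usage via the `translation` pipeline, not verified by the author), inference could look like:
```python
from transformers import pipeline

# Russian-to-English translation with the fine-tuned checkpoint
translator = pipeline("translation", model="BukaByaka/opus-mt-ru-en-finetuned-ru-to-en")
print(translator("Привет, как дела?"))
```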
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2606 | 1.0 | 94761 | 1.4092 | 30.4049 | 26.3911 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kktoto/tiny_focal_alpah
|
kktoto
| 2022-06-27T13:47:19Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-27T01:31:29Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_alpah
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_focal_alpah
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0492
- Precision: 0.6951
- Recall: 0.6796
- F1: 0.6873
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
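Since the card does not document the task or label set, one way to inspect what the model actually predicts (purely illustrative) is to read the label mapping from its configuration:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("kktoto/tiny_focal_alpah")
print(config.id2label)  # the label set is not documented in this card
```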
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0588 | 1.0 | 5561 | 0.0548 | 0.6801 | 0.6235 | 0.6506 | 0.9453 |
| 0.054 | 2.0 | 11122 | 0.0521 | 0.6850 | 0.6478 | 0.6659 | 0.9476 |
| 0.0525 | 3.0 | 16683 | 0.0509 | 0.6834 | 0.6676 | 0.6754 | 0.9486 |
| 0.0492 | 4.0 | 22244 | 0.0503 | 0.6829 | 0.6754 | 0.6791 | 0.9491 |
| 0.0482 | 5.0 | 27805 | 0.0500 | 0.6917 | 0.6727 | 0.6820 | 0.9501 |
| 0.0471 | 6.0 | 33366 | 0.0491 | 0.7085 | 0.6546 | 0.6805 | 0.9510 |
| 0.0459 | 7.0 | 38927 | 0.0486 | 0.6964 | 0.6746 | 0.6853 | 0.9510 |
| 0.0448 | 8.0 | 44488 | 0.0495 | 0.6922 | 0.6813 | 0.6867 | 0.9509 |
| 0.044 | 9.0 | 50049 | 0.0491 | 0.6961 | 0.6755 | 0.6857 | 0.9511 |
| 0.0433 | 10.0 | 55610 | 0.0492 | 0.6951 | 0.6796 | 0.6873 | 0.9512 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
|
gary109
| 2022-06-27T13:34:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-26T14:19:49Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
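No usage example is given; assuming standard Wav2Vec2 CTC inference applies (note that this model appears to transcribe StepMania onset annotations rather than ordinary speech), a minimal sketch would be:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4",
)
print(asr("path/to/audio.wav"))  # placeholder audio path
```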
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9218 | 1.0 | 188 | 1.0718 | 0.6958 |
| 0.9194 | 2.0 | 376 | 1.0354 | 0.6937 |
| 0.9077 | 3.0 | 564 | 1.0365 | 0.6730 |
| 0.8956 | 4.0 | 752 | 1.0497 | 0.6727 |
| 0.877 | 5.0 | 940 | 1.0299 | 0.6694 |
| 0.8736 | 6.0 | 1128 | 1.0298 | 0.6642 |
| 0.8769 | 7.0 | 1316 | 1.0348 | 0.6584 |
| 0.8571 | 8.0 | 1504 | 1.0689 | 0.6602 |
| 0.8573 | 9.0 | 1692 | 1.0559 | 0.6549 |
| 0.8458 | 10.0 | 1880 | 1.0706 | 0.6588 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
sasha/swin-tiny-finetuned-dogfood
|
sasha
| 2022-06-27T13:26:02Z | 83 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"dataset:lewtun/dog_food",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T09:46:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
- lewtun/dog_food
metrics:
- accuracy
model-index:
- name: swin-tiny-finetuned-dogfood
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
args: lewtun--dog_food
metrics:
- name: Accuracy
type: accuracy
value: 0.988
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9826666666666667
verified: true
- name: Precision Macro
type: precision
value: 0.9820904286553143
verified: true
- name: Precision Micro
type: precision
value: 0.9826666666666667
verified: true
- name: Precision Weighted
type: precision
value: 0.9828416519866903
verified: true
- name: Recall Macro
type: recall
value: 0.9828453314981092
verified: true
- name: Recall Micro
type: recall
value: 0.9826666666666667
verified: true
- name: Recall Weighted
type: recall
value: 0.9826666666666667
verified: true
- name: F1 Macro
type: f1
value: 0.9824101123169301
verified: true
- name: F1 Micro
type: f1
value: 0.9826666666666667
verified: true
- name: F1 Weighted
type: f1
value: 0.9826983433609648
verified: true
- name: loss
type: loss
value: 0.2326570302248001
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.974016655798285
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-finetuned-dogfood
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the lewtun/dog_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1959
- Accuracy: 0.988
## Model description
More information needed
## Intended uses & limitations
More information needed
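No usage example is provided; a minimal, illustrative sketch with the image-classification pipeline (the image path is a placeholder) would be:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sasha/swin-tiny-finetuned-dogfood")
print(classifier("path/to/image.jpg"))  # placeholder image path
```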
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8198 | 1.0 | 16 | 0.1901 | 0.9822 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gopalkalpande/t5-small-finetuned-bbc-news-summarization
|
gopalkalpande
| 2022-06-27T13:15:58Z | 5 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-27T13:12:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gopalkalpande/t5-small-finetuned-bbc-news-summarization
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gopalkalpande/t5-small-finetuned-bbc-news-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7637
- Validation Loss: 0.3528
- Train Rouge1: 19.4783
- Train Rouge2: 13.2994
- Train Rougel: 17.4791
- Train Rougelsum: 17.6204
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
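No usage example is provided; since this checkpoint was saved with Keras/TensorFlow, an illustrative sketch (the input text is a placeholder) would be:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="gopalkalpande/t5-small-finetuned-bbc-news-summarization",
    framework="tf",  # the repository contains TensorFlow weights
)
print(summarizer("Your BBC-style news article text goes here...", max_length=60))
```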
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 4e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.7637 | 0.3528 | 19.4783 | 13.2994 | 17.4791 | 17.6204 | 19.0 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
douwekiela/resnet-18-finetuned-dogfood
|
douwekiela
| 2022-06-27T12:38:50Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"dataset:lewtun/dog_food",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T09:42:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
- lewtun/dog_food
metrics:
- accuracy
model-index:
- name: resnet-18-finetuned-dogfood
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
args: lewtun--dog_food
metrics:
- name: Accuracy
type: accuracy
value: 0.896
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8466666666666667
verified: true
- name: Precision Macro
type: precision
value: 0.8850127293141284
verified: true
- name: Precision Micro
type: precision
value: 0.8466666666666667
verified: true
- name: Precision Weighted
type: precision
value: 0.8939157698241645
verified: true
- name: Recall Macro
type: recall
value: 0.8555113273379528
verified: true
- name: Recall Micro
type: recall
value: 0.8466666666666667
verified: true
- name: Recall Weighted
type: recall
value: 0.8466666666666667
verified: true
- name: F1 Macro
type: f1
value: 0.8431399312051647
verified: true
- name: F1 Micro
type: f1
value: 0.8466666666666667
verified: true
- name: F1 Weighted
type: f1
value: 0.8430272582865614
verified: true
- name: loss
type: loss
value: 0.3633290231227875
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.7973101366252381
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-dogfood
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the lewtun/dog_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.896
## Model description
More information needed
## Intended uses & limitations
More information needed
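No usage example is provided; as an illustrative sketch using the lower-level API (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("douwekiela/resnet-18-finetuned-dogfood")
model = AutoModelForImageClassification.from_pretrained("douwekiela/resnet-18-finetuned-dogfood")

image = Image.open("path/to/image.jpg")  # placeholder image path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```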
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.846 | 1.0 | 16 | 0.2662 | 0.9156 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zezafa/q-Taxi-v3
|
zezafa
| 2022-06-27T11:52:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T11:52:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.38 +/- 2.77
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# Note: load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published library.
import gym

model = load_from_hub(repo_id="zezafa/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Davlan/naija-twitter-sentiment-afriberta-large
|
Davlan
| 2022-06-27T11:50:40Z | 69 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.08277",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- hau
- ibo
- pcm
- yor
- multilingual
---
# naija-twitter-sentiment-afriberta-large
## Model description
**naija-twitter-sentiment-afriberta-large** is the first multilingual Twitter **sentiment classification** model for four (4) Nigerian languages (Hausa, Igbo, Nigerian Pidgin, and Yorùbá), based on a fine-tuned castorini/afriberta_large model.
It achieves **state-of-the-art performance** on the Twitter sentiment classification task when trained on the [NaijaSenti corpus](https://github.com/hausanlp/NaijaSenti).
The model has been trained to classify tweets into 3 sentiment classes: negative, neutral and positive.
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of 4 Nigerian language datasets obtained from [NaijaSenti](https://github.com/hausanlp/NaijaSenti) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for Sentiment Classification.
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
MODEL = "Davlan/naija-twitter-sentiment-afriberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
text = "I like you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
id2label = {0:"positive", 1:"neutral", 2:"negative"}
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
#### Limitations and bias
This model is limited by its training dataset and domain, i.e. Twitter. It may not generalize well to all use cases in different domains.
## Training procedure
This model was trained on a single Nvidia RTX 2080 GPU with recommended hyperparameters from the [original NaijaSenti paper](https://arxiv.org/abs/2201.08277).
## Eval results on Test set (F-score), averaged over 5 runs
language|F1-score
-|-
hau |81.2
ibo |80.8
pcm |74.5
yor |80.4
### BibTeX entry and citation info
```
@inproceedings{Muhammad2022NaijaSentiAN,
title={NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis},
author={Shamsuddeen Hassan Muhammad and David Ifeoluwa Adelani and Sebastian Ruder and Ibrahim Said Ahmad and Idris Abdulmumin and Bello Shehu Bello and Monojit Choudhury and Chris C. Emezue and Saheed Salahudeen Abdullahi and Anuoluwapo Aremu and Alipio Jeorge and Pavel B. Brazdil},
year={2022}
}
```
|
Davlan/bert-base-multilingual-cased-masakhaner
|
Davlan
| 2022-06-27T11:50:04Z | 14 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# bert-base-multilingual-cased-masakhaner
## Model description
**bert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned mBERT base model. It achieves **state-of-the-art performance** on the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.66
ibo |85.72
kin |71.94
lug |81.73
luo |77.39
pcm |88.96
swa |88.23
wol |66.27
yor |80.09
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
abhishek/convnext-tiny-finetuned-dogfood
|
abhishek
| 2022-06-27T11:01:31Z | 59 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"dataset:lewtun/dog_food",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T09:36:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
- lewtun/dog_food
metrics:
- accuracy
model-index:
- name: convnext-tiny-finetuned-dogfood
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
args: lewtun--dog_food
metrics:
- name: Accuracy
type: accuracy
value: 0.7253333333333334
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.6866666666666666
verified: true
- name: Precision Macro
type: precision
value: 0.7181484576740136
verified: true
- name: Precision Micro
type: precision
value: 0.6866666666666666
verified: true
- name: Precision Weighted
type: precision
value: 0.7235392474854474
verified: true
- name: Recall Macro
type: recall
value: 0.7006250320552644
verified: true
- name: Recall Micro
type: recall
value: 0.6866666666666666
verified: true
- name: Recall Weighted
type: recall
value: 0.6866666666666666
verified: true
- name: F1 Macro
type: f1
value: 0.6690027379410202
verified: true
- name: F1 Micro
type: f1
value: 0.6866666666666666
verified: true
- name: F1 Weighted
type: f1
value: 0.6647526870157503
verified: true
- name: loss
type: loss
value: 0.9549381732940674
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.5737269361889515
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-finetuned-dogfood
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the lewtun/dog_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9277
- Accuracy: 0.7253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0681 | 1.0 | 16 | 0.9125 | 0.7422 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Davlan/distilbert-base-multilingual-cased-masakhaner
|
Davlan
| 2022-06-27T10:57:26Z | 27 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# distilbert-base-multilingual-cased-masakhaner
## Model description
**distilbert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT base model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.88
ibo |84.87
kin |74.19
lug |78.43
luo |73.32
pcm |87.98
swa |86.20
wol |64.67
yor |78.10
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
Rahulrr/language_model_en_de
|
Rahulrr
| 2022-06-27T10:42:46Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-27T10:09:17Z |
---
language:
- en
- de
tags:
- translation
license: apache-2.0
---
### en-de
* source group: English
* target group: German
* OPUS readme: [eng-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md)
* model: transformer-big
* source language(s): eng
* target language(s): deu
* raw source language(s): eng
* raw target language(s): deu
* model: transformer-big
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-12-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip)
* test set translations: [opusTCv20210807+bt-2021-12-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt)
* test set scores: [opusTCv20210807+bt-2021-12-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.eval.txt)
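The card does not include a usage snippet; since this appears to be a converted OPUS/Marian checkpoint, an illustrative sketch would be:
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Rahulrr/language_model_en_de")
model = MarianMTModel.from_pretrained("Rahulrr/language_model_en_de")

# English-to-German translation of a sample sentence
batch = tokenizer(["The weather is nice today."], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```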
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newssyscomb2009.eng-deu | 24.3 | 0.5462 | 502 | 11271 | 0.993 |
| news-test2008.eng-deu | 24.7 | 0.5412 | 2051 | 47427 | 1.000 |
| newstest2009.eng-deu | 23.6 | 0.5385 | 2525 | 62816 | 0.999 |
| newstest2010.eng-deu | 26.9 | 0.5589 | 2489 | 61511 | 0.966 |
| newstest2011.eng-deu | 24.1 | 0.5364 | 3003 | 72981 | 0.990 |
| newstest2012.eng-deu | 24.6 | 0.5375 | 3003 | 72886 | 0.972 |
| newstest2013.eng-deu | 28.3 | 0.5636 | 3000 | 63737 | 0.988 |
| newstest2014-deen.eng-deu | 30.9 | 0.6084 | 3003 | 62964 | 1.000 |
| newstest2015-ende.eng-deu | 33.2 | 0.6106 | 2169 | 44260 | 1.000 |
| newstest2016-ende.eng-deu | 39.8 | 0.6595 | 2999 | 62670 | 0.993 |
| newstest2017-ende.eng-deu | 32.0 | 0.6047 | 3004 | 61291 | 1.000 |
| newstest2018-ende.eng-deu | 48.8 | 0.7146 | 2998 | 64276 | 1.000 |
| newstest2019-ende.eng-deu | 45.0 | 0.6821 | 1997 | 48969 | 0.995 |
| Tatoeba-test-v2021-08-07.eng-deu | 43.7 | 0.6442 | 10000 | 85728 | 1.000 |
### System Info:
- hf_name: en-de
- source_languages: eng
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'de']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('German', {'deu'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-deu
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt
- src_alpha3: eng
- tgt_alpha3: deu
- chrF2_score: 0.6442
- bleu: 43.7
- src_name: English
- tgt_name: German
- train_date: 2021-12-08 00:00:00
- src_alpha2: en
- tgt_alpha2: de
- prefer_old: False
- short_pair: en-de
- helsinki_git_sha: c4e978d8de47875b482653b423dcfe968979d7d5
- transformers_git_sha: 56b83cf049823ed074a655eceb28f31e2077c6eb
- port_machine: LAPIN4GLQ2G3
- port_time: 2022-06-27-16:10
|
JeremiahZ/reproduce-unsup-roberta-base-avg
|
JeremiahZ
| 2022-06-27T10:19:27Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"generated_from_trainer",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-27T08:09:54Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
model-index:
- name: reproduce-unsup-roberta-base-avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reproduce-unsup-roberta-base-avg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
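The card gives no details yet; the model name suggests an unsupervised SimCSE-style setup with average pooling, so, purely as an interpretation, sentence embeddings could be extracted like this:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("JeremiahZ/reproduce-unsup-roberta-base-avg")
model = AutoModel.from_pretrained("JeremiahZ/reproduce-unsup-roberta-base-avg")

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (batch, seq_len, dim)

mask = inputs["attention_mask"].unsqueeze(-1)    # average over real tokens only
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0))
```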
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Laure996/bert-finetuned-ner
|
Laure996
| 2022-06-27T10:00:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-27T09:31:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9329136988570482
- name: Recall
type: recall
value: 0.9478290138000673
- name: F1
type: f1
value: 0.9403122130394858
- name: Accuracy
type: accuracy
value: 0.9855477718255137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0663
- Precision: 0.9329
- Recall: 0.9478
- F1: 0.9403
- Accuracy: 0.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
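No usage example is provided; a minimal, illustrative sketch with the token-classification pipeline (the example sentence is arbitrary) would be:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Laure996/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```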
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0837 | 1.0 | 1756 | 0.0656 | 0.9151 | 0.9392 | 0.9270 | 0.9834 |
| 0.0388 | 2.0 | 3512 | 0.0619 | 0.9249 | 0.9475 | 0.9361 | 0.9855 |
| 0.0198 | 3.0 | 5268 | 0.0663 | 0.9329 | 0.9478 | 0.9403 | 0.9855 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
danielmantisnlp/autotrain-oms-ner-bi-1044135953
|
danielmantisnlp
| 2022-06-27T09:39:42Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain",
"en",
"dataset:danielmantisnlp/autotrain-data-oms-ner-bi",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-27T09:38:38Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- danielmantisnlp/autotrain-data-oms-ner-bi
co2_eq_emissions: 1.425282392185522
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1044135953
- CO2 Emissions (in grams): 1.425282392185522
## Validation Metrics
- Loss: 0.4587894678115845
- Accuracy: 0.8957797220792589
- Precision: 0.553921568627451
- Recall: 0.6793587174348698
- F1: 0.6102610261026103
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/danielmantisnlp/autotrain-oms-ner-bi-1044135953
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is only needed for private repos)
model = AutoModelForTokenClassification.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)

# Tokenize an example sentence and run it through the model
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
facebook/convnext-xlarge-224-22k-1k
|
facebook
| 2022-06-27T08:55:36Z | 279 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"convnext",
"image-classification",
"vision",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (xlarge-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-xlarge-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
Corianas/ppo-SpaceInvadersNoFrameskip-v4.loadbest
|
Corianas
| 2022-06-27T08:25:03Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T08:24:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 729.50 +/- 289.14
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
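Outside the RL Zoo scripts, the checkpoint can also be loaded directly with stable-baselines3. A minimal sketch, assuming the agent is stored as a standard SB3 zip archive (the filename below is an assumption; check the repository's file list for the actual name):
```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import PPO

# Download the zipped agent from the Hub; the filename is hypothetical,
# replace it with the actual archive name found in the repository.
checkpoint = hf_hub_download(
    repo_id="Corianas/ppo-SpaceInvadersNoFrameskip-v4.loadbest",
    filename="ppo-SpaceInvadersNoFrameskip-v4.zip",
)

model = PPO.load(checkpoint)
print(model.policy)
```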
|
zyxzyx/autotrain-sum-1042335811
|
zyxzyx
| 2022-06-27T05:15:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"zh",
"dataset:zyxzyx/autotrain-data-sum",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-27T01:25:28Z |
---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- zyxzyx/autotrain-data-sum
co2_eq_emissions: 426.15271368095927
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1042335811
- CO2 Emissions (in grams): 426.15271368095927
## Validation Metrics
- Loss: 1.7748287916183472
- Rouge1: 0.536
- Rouge2: 0.0
- RougeL: 0.536
- RougeLsum: 0.536
- Gen Len: 10.9089
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zyxzyx/autotrain-sum-1042335811
```
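Only the Inference API route is shown above; as a minimal sketch, the checkpoint can also be used locally with the `transformers` Python API, assuming the repository contains the exported mT5 weights and tokenizer (the Chinese input text below is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "zyxzyx/autotrain-sum-1042335811"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Summarize a short (illustrative) Chinese text
text = "今天上午,市政府召开新闻发布会,介绍了新的公共交通规划。"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```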
|
jcmc/q-Taxi-v3
|
jcmc
| 2022-06-27T04:21:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T04:21:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.46 +/- 2.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook (not part of a published package);
# `gym` must be imported separately.
model = load_from_hub(repo_id="jcmc/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
TheRensselaerIDEA/gpt2-large-vaccine-tweet-response
|
TheRensselaerIDEA
| 2022-06-27T03:22:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"arxiv:2204.04353",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-27T03:03:38Z |
---
license: mit
---
Base model: [gpt2-large](https://huggingface.co/gpt2-large)
Fine-tuned to generate responses on a dataset of [Vaccine public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (2.82 at 2 epochs) seen during training. See Training metrics for Tensorboard logs.
For input format and usage examples, see our [COVID-19 public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-covid-tweet-response).
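A bare-bones loading sketch follows; the prompt is only a placeholder, since the task-specific input format this model expects is documented in the linked COVID-19 response card:
```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "TheRensselaerIDEA/gpt2-large-vaccine-tweet-response"
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

# Placeholder prompt: the real input format (tweet plus context fields) is
# described in the linked COVID-19 public health tweet response model card.
inputs = tokenizer("Vaccines are now available at my local pharmacy.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```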
|
neweasterns/wav2vec2-base-timit-demo-google-colab
|
neweasterns
| 2022-06-27T02:49:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T00:01:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
- Wer: 0.3388
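No inference example is provided; a minimal sketch using the ASR pipeline (assuming a 16 kHz mono recording, which is what wav2vec2-base expects) could look like this:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="neweasterns/wav2vec2-base-timit-demo-google-colab",
)

# "sample.wav" is a placeholder path; decoding a local file requires ffmpeg.
print(asr("sample.wav"))
```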
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5597 | 1.0 | 500 | 2.3415 | 0.9991 |
| 0.9759 | 2.01 | 1000 | 0.5556 | 0.5382 |
| 0.4587 | 3.01 | 1500 | 0.7690 | 0.4781 |
| 0.3156 | 4.02 | 2000 | 0.7994 | 0.4412 |
| 0.2272 | 5.02 | 2500 | 0.8948 | 0.4120 |
| 0.1921 | 6.02 | 3000 | 0.7065 | 0.3940 |
| 0.1618 | 7.03 | 3500 | 0.4333 | 0.3855 |
| 0.1483 | 8.03 | 4000 | 0.4232 | 0.3872 |
| 0.156 | 9.04 | 4500 | 0.4172 | 0.3749 |
| 0.1138 | 10.04 | 5000 | 0.4084 | 0.3758 |
| 0.1045 | 11.04 | 5500 | 0.4665 | 0.3623 |
| 0.0908 | 12.05 | 6000 | 0.4416 | 0.3684 |
| 0.0788 | 13.05 | 6500 | 0.4801 | 0.3659 |
| 0.0773 | 14.06 | 7000 | 0.4560 | 0.3583 |
| 0.0684 | 15.06 | 7500 | 0.4878 | 0.3610 |
| 0.0645 | 16.06 | 8000 | 0.4635 | 0.3567 |
| 0.0577 | 17.07 | 8500 | 0.5245 | 0.3548 |
| 0.0547 | 18.07 | 9000 | 0.5265 | 0.3639 |
| 0.0466 | 19.08 | 9500 | 0.5161 | 0.3546 |
| 0.0432 | 20.08 | 10000 | 0.5263 | 0.3558 |
| 0.0414 | 21.08 | 10500 | 0.4874 | 0.3500 |
| 0.0365 | 22.09 | 11000 | 0.5266 | 0.3472 |
| 0.0321 | 23.09 | 11500 | 0.5422 | 0.3458 |
| 0.0325 | 24.1 | 12000 | 0.5201 | 0.3428 |
| 0.0262 | 25.1 | 12500 | 0.5208 | 0.3398 |
| 0.0249 | 26.1 | 13000 | 0.5034 | 0.3429 |
| 0.0262 | 27.11 | 13500 | 0.5055 | 0.3396 |
| 0.0248 | 28.11 | 14000 | 0.5164 | 0.3404 |
| 0.0222 | 29.12 | 14500 | 0.5206 | 0.3388 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
RUCAIBox/mvp-task-dialog
|
RUCAIBox
| 2022-06-27T02:28:25Z | 2 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:53:57Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the task dialog: Belief state [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example1"
- text: "Given the task dialog: Dialogue action [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example2"
- text: "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example3"
---
# MVP-task-dialog
The MVP-task-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-task-dialog is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled task-oriented system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-task-dialog is specially designed for task-oriented dialogue tasks, such as MultiWOZ.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-task-dialog")
>>> inputs = tokenizer(
... "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['What date and time would you like to go?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|