| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
dodge99/q-Taxi-v3
|
dodge99
| 2022-11-04T23:41:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T23:41:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face
# Deep RL course notebook; they are assumed to be defined in the surrounding context.
model = load_from_hub(repo_id="dodge99/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dodge99/q-FrozenLake-v1-4x4-Slippery
|
dodge99
| 2022-11-04T23:27:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T23:08:03Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.58 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face
# Deep RL course notebook; they are assumed to be defined in the surrounding context.
model = load_from_hub(repo_id="dodge99/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Nicktherat/DialoGPT-medium-endella
|
Nicktherat
| 2022-11-04T23:04:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-04T08:30:21Z |
---
tags:
- conversational
---
# Endella DialoGPT Model
A chat-loop sketch reconstructed from the fragmentary original card; the model/tokenizer loading and the 4-turn loop are assumed from the standard DialoGPT card template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Nicktherat/DialoGPT-medium-endella")
model = AutoModelForCausalLM.from_pretrained("Nicktherat/DialoGPT-medium-endella")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id, temperature=0.6, repetition_penalty=1.3)
    # pretty print the last output tokens from the bot
    print("Endella: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
mariopeng/phoneT5
|
mariopeng
| 2022-11-04T22:53:02Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-17T20:01:55Z |
# Description
Transfer learning on T5 to translate English graphemes to IPA (International Phonetic Alphabet).
- Include `translate to IPA: ` as a prefix when prompting (see the sketch below).
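A minimal inference sketch (an assumption, not part of the original card; the standard T5 auto classes are presumed to apply):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mariopeng/phoneT5")
model = AutoModelForSeq2SeqLM.from_pretrained("mariopeng/phoneT5")

# prepend the prompting prefix described above
inputs = tokenizer("translate to IPA: hello world", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```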
|
jinhybr/OCR-DocVQA-Donut
|
jinhybr
| 2022-11-04T22:23:22Z | 122 | 11 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"document-question-answering",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2022-11-04T22:11:29Z |
---
license: mit
pipeline_tag: document-question-answering
tags:
- donut
- image-to-text
- vision
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
---
# Donut (base-sized model, fine-tuned on DocVQA)
Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes it into a tensor of embeddings of shape `(batch_size, seq_len, hidden_size)`, after which the decoder autoregressively generates text, conditioned on that encoding.

## Intended uses & limitations
This model is fine-tuned on DocVQA, a document visual question answering dataset.
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
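A usage sketch adapted from the Transformers Donut documentation (an assumption: it presumes this checkpoint follows the standard Donut DocVQA prompt format; `invoice.png` is a placeholder file):
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jinhybr/OCR-DocVQA-Donut")
model = VisionEncoderDecoderModel.from_pretrained("jinhybr/OCR-DocVQA-Donut")

image = Image.open("invoice.png").convert("RGB")  # placeholder document image
prompt = "<s_docvqa><s_question>What is the invoice number?</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))
```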
|
krafczyk/distilbert-base-uncased-finetuned-emotion
|
krafczyk
| 2022-11-04T21:33:28Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T21:20:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.924884946845687
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2455
- Accuracy: 0.925
- F1: 0.9249
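As an illustration (not part of the original card), the checkpoint can be queried with the standard text-classification pipeline; the example sentence and the label shown are assumptions:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="krafczyk/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the experiment finally worked!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- label names depend on the checkpoint config
```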
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
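For readers wiring this up themselves, the list above corresponds roughly to the following `TrainingArguments` sketch (an assumption; the original training script is not included in the card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,  # the Adam betas/epsilon listed above are the optimizer defaults
)
```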
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 63 | 0.8670 | 0.7095 | 0.6491 |
| No log | 2.0 | 126 | 0.3938 | 0.886 | 0.8804 |
| No log | 3.0 | 189 | 0.2669 | 0.921 | 0.9201 |
| 0.6268 | 4.0 | 252 | 0.2455 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 2.6.1
- Tokenizers 0.10.3
|
jrtec/jrtec-distilroberta-base-mrpc-glue-omar-espejel
|
jrtec
| 2022-11-04T20:31:03Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T15:53:58Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: jrtec-distilroberta-base-mrpc-glue-omar-espejel
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8161764705882353
- name: F1
type: f1
value: 0.8747913188647747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jrtec-distilroberta-base-mrpc-glue-omar-espejel
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4901
- Accuracy: 0.8162
- F1: 0.8748
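For illustration (not part of the original card), MRPC-style paraphrase classification with this checkpoint might look like the sketch below; the label semantics are assumed to follow GLUE/MRPC (index 1 = paraphrase):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "jrtec/jrtec-distilroberta-base-mrpc-glue-omar-espejel"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# MRPC is a sentence-pair task: encode the two sentences together
inputs = tokenizer(
    "The company said profits rose in the third quarter.",
    "Profits increased in Q3, the company reported.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # index 1 ~ "paraphrase" under the usual GLUE/MRPC labels
```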
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4845 | 1.09 | 500 | 0.4901 | 0.8162 | 0.8748 |
| 0.3706 | 2.18 | 1000 | 0.6421 | 0.8162 | 0.8691 |
| 0.2003 | 3.27 | 1500 | 0.9711 | 0.8162 | 0.8760 |
| 0.1281 | 4.36 | 2000 | 0.8224 | 0.8480 | 0.8893 |
| 0.0717 | 5.45 | 2500 | 1.1803 | 0.8113 | 0.8511 |
| 0.0344 | 6.54 | 3000 | 1.1759 | 0.8480 | 0.8935 |
| 0.0277 | 7.63 | 3500 | 1.2140 | 0.8456 | 0.8927 |
| 0.0212 | 8.71 | 4000 | 1.0895 | 0.8554 | 0.8974 |
| 0.0071 | 9.8 | 4500 | 1.1849 | 0.8554 | 0.8991 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
platzi/platzi-distilroberta-base-glue-mrpc-eduardo-ag
|
platzi
| 2022-11-04T19:49:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T19:25:03Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-glue-mrpc-eduardo-ag
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8186274509803921
- name: F1
type: f1
value: 0.8634686346863469
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-glue-mrpc-eduardo-ag
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6614
- Accuracy: 0.8186
- F1: 0.8635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5185 | 1.09 | 500 | 0.4796 | 0.8431 | 0.8889 |
| 0.3449 | 2.18 | 1000 | 0.6614 | 0.8186 | 0.8635 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
spoiled/roberta-large-neg-tags
|
spoiled
| 2022-11-04T18:49:35Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T18:05:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-neg-tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-neg-tags
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.0143 | 1.0 | 938 | 0.0032 | 0.0 | 0.0 | 0.0 | 0.9995 |
| 0.0033 | 2.0 | 1876 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.9996 |
| 0.0039 | 3.0 | 2814 | 0.0018 | 0.0 | 0.0 | 0.0 | 0.9997 |
| 0.0012 | 4.0 | 3752 | 0.0016 | 0.0 | 0.0 | 0.0 | 0.9997 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.10.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/itsbludood
|
huggingtweets
| 2022-11-04T18:36:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-04T18:36:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/itsbludood/1667587006494/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1543744611742584834/Y_8SQZ8s_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BluDood</div>
<div style="text-align: center; font-size: 14px;">@itsbludood</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BluDood.
| Data | BluDood |
| --- | --- |
| Tweets downloaded | 579 |
| Retweets | 126 |
| Short tweets | 62 |
| Tweets kept | 391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wux94qs4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itsbludood's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w2ic8dfp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w2ic8dfp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itsbludood')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
spoiled/roberta-base-neg-tags
|
spoiled
| 2022-11-04T18:16:43Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T18:05:11Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-neg-tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-neg-tags
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 235 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.9993 |
| No log | 2.0 | 470 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.9997 |
| 0.0073 | 3.0 | 705 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.9997 |
| 0.0073 | 4.0 | 940 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.9997 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.10.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k
|
mpjan
| 2022-11-04T16:39:00Z | 2 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"dataset:unicamp-dl/mmarco",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-03T13:42:19Z |
---
pipeline_tag: sentence-similarity
language:
- 'pt'
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- 'unicamp-dl/mmarco'
---
# mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is a fine-tuning of [sentence-transformers/msmarco-distilbert-base-tas-b](https://huggingface.co/sentence-transformers/msmarco-distilbert-base-tas-b) on the first 100k triplets of the Portuguese subset in [unicamp-dl/mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco).
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k')
model = AutoModel.from_pretrained('mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mpjan/msmarco-distilbert-base-tas-b-mmarco-pt-100k)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3125,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
troesy/distilBERT-fresh_10epoch
|
troesy
| 2022-11-04T15:57:02Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T15:45:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT-fresh_10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-fresh_10epoch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0234
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 174 | 0.1913 | 0.0 | 0.0 | 0.0 | 0.9312 |
| No log | 2.0 | 348 | 0.1431 | 0.0 | 0.0 | 0.0 | 0.9507 |
| 0.2211 | 3.0 | 522 | 0.1053 | 0.0 | 0.0 | 0.0 | 0.9640 |
| 0.2211 | 4.0 | 696 | 0.0770 | 0.0 | 0.0 | 0.0 | 0.9746 |
| 0.2211 | 5.0 | 870 | 0.0581 | 0.0 | 0.0 | 0.0 | 0.9820 |
| 0.0995 | 6.0 | 1044 | 0.0461 | 0.0 | 0.0 | 0.0 | 0.9862 |
| 0.0995 | 7.0 | 1218 | 0.0376 | 0.0 | 0.0 | 0.0 | 0.9886 |
| 0.0995 | 8.0 | 1392 | 0.0290 | 0.0 | 0.0 | 0.0 | 0.9915 |
| 0.054 | 9.0 | 1566 | 0.0238 | 0.0 | 0.0 | 0.0 | 0.9934 |
| 0.054 | 10.0 | 1740 | 0.0234 | 0.0 | 0.0 | 0.0 | 0.9935 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sd-dreambooth-library/angus-mcbride-style-v2
|
sd-dreambooth-library
| 2022-11-04T15:46:03Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-04T15:46:01Z |
---
license: mit
---
### angus mcbride style v2 on Stable Diffusion via Dreambooth
#### model by hiero
This is the Stable Diffusion model fine-tuned on the angus mcbride style v2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **angus mcbride style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
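A minimal inference sketch (an assumption, not from the original card; it presumes the repo stores a standard diffusers-format pipeline, as sd-dreambooth-library concepts typically do):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/angus-mcbride-style-v2", torch_dtype=torch.float16
).to("cuda")
image = pipe("a knight on horseback, angus mcbride style").images[0]
image.save("angus_mcbride_knight.png")
```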
Here are the images used for training this concept:




































































































|
arjunchandra/ddpm-butterflies-128
|
arjunchandra
| 2022-11-04T15:14:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-04T13:58:06Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sampling sketch (an assumption; the original card left this as a TODO):
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("arjunchandra/ddpm-butterflies-128")
image = pipeline().images[0]  # sample one 128x128 butterfly image
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/arjunchandra/ddpm-butterflies-128/tensorboard?#scalars)
|
NikitaShu/testPyramids
|
NikitaShu
| 2022-11-04T14:35:57Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-11-04T14:35:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: NikitaShu/testPyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
RaulFD-creator/BrigitCNN
|
RaulFD-creator
| 2022-11-04T14:29:16Z | 0 | 0 | null |
[
"license:bsd-3-clause",
"region:us"
] | null | 2022-11-04T14:26:01Z |
---
license: bsd-3-clause
---
BrigitCNN: a CNN model trained to detect protein-metal binding regions.
|
svo2/roberta-finetuned-location
|
svo2
| 2022-11-04T14:03:56Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-02T17:48:05Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-location
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-location
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
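Since the base model is an extractive question-answering model, a minimal usage sketch (an assumption, not from the card; the question and context are illustrative) would be:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="svo2/roberta-finetuned-location")
result = qa(
    question="Where is the company located?",
    context="The company, founded in 2010, is located in Berlin, Germany.",
)
print(result["answer"])
```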
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
sd-concepts-library/happy-chaos
|
sd-concepts-library
| 2022-11-04T13:55:04Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-04T13:54:52Z |
---
license: mit
---
### Happy Chaos on Stable Diffusion
This is the `<happychaos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
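With recent diffusers versions, the learned embedding can also be loaded directly — a sketch under the assumption of a standard textual-inversion repo and an SD v1.x base model:
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/happy-chaos")
image = pipe("a photo of <happychaos> as an object").images[0]
image.save("happychaos.png")
```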
Here is the new concept you will be able to use as an `object`:





|
August06/august
|
August06
| 2022-11-04T13:28:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-04T13:28:49Z |
---
license: creativeml-openrail-m
---
|
DeepSpiral/SD_Michael_Jackson_Young_v1
|
DeepSpiral
| 2022-11-04T13:22:47Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-03T00:50:36Z |
---
license: creativeml-openrail-m
---
About:
This model was created with the intention of preserving the image of Michael Jackson, the King of Pop.
It was made in tribute to his memory; hopefully you will find it helpful.
The model covers exclusively the young version of him, at around 24 years old, post-illness/surgeries (you may forgive me for not knowing the full history).
It was inspired by noticing that the original Stable Diffusion model was unable to recall this earlier version of Michael Jackson, and would instead produce the post-surgery likenesses closer to his passing.
It also demonstrates how popular long-lost figures can be preserved safely, throughout the entirety of their appearance.
With much respect I offer you the opportunity to take a look at this model. As it was built in the image of a public figure, the model shall remain public and free.
Here are some of the Input Images:

Here you can see the Output Images of what to expect using this Model:

How to use:
To generate an image of the young Michael Jackson, simply include "MichJack241" in your prompt.
Where to get the model:
https://huggingface.co/DeepSpiral/SD_Michael_Jackson_Young_v1/blob/main/SD_Michael%20_Jackson_Young_v1.ckpt
Download the file from the link above and import it into your Stable Diffusion setup as a trained model (depending on the interface you are using).
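For diffusers users, a hedged loading sketch (an assumption — the card itself only describes importing the checkpoint into a Stable Diffusion UI):
```python
from diffusers import StableDiffusionPipeline

# recent diffusers releases can load original .ckpt checkpoints directly
pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/DeepSpiral/SD_Michael_Jackson_Young_v1/blob/main/SD_Michael%20_Jackson_Young_v1.ckpt"
)
image = pipe("portrait photo of MichJack241 on stage").images[0]
image.save("michjack241.png")
```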
Use with Caution and Respect,
Enjoy!
___
*If you enjoy my work, please consider supporting me*
<a rel="noopener nofollow" href="https://www.patreon.com/deepspiral?fan_landing=true&view_as=public" class="keychainify-checked"><img alt="Become A Patreon" src="https://badgen.net/badge/become/a%20patron/F96854"></a>
___
|
sirui/bert-base-chinese-finetuned-car_corpus
|
sirui
| 2022-11-04T12:45:04Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-04T07:08:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-car_corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-car_corpus
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the Car Corpus Database.
It achieves the following results on the evaluation set:
- Loss: 1.5187
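For illustration (not from the original card), the checkpoint can be used with the fill-mask pipeline; the sentence is a hypothetical car-domain prompt ("this [MASK] has low fuel consumption"):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="sirui/bert-base-chinese-finetuned-car_corpus")
for pred in fill("这辆[MASK]的油耗很低。"):
    print(pred["token_str"], round(pred["score"], 3))
```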
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.799 | 1.0 | 3776 | 1.5830 |
| 0.7419 | 2.0 | 7552 | 1.4930 |
| 0.7245 | 3.0 | 11328 | 1.5187 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
GuiGel/meddocan-flair-lstm-crf
|
GuiGel
| 2022-11-04T12:37:38Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"region:us"
] |
token-classification
| 2022-11-04T12:36:13Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("GuiGel/meddocan-flair-lstm-crf")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
|
roa7n/DNABert_K6_G_quad_1
|
roa7n
| 2022-11-04T12:05:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T11:33:37Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DNABert_K6_G_quad_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNABert_K6_G_quad_1
This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0803
- Accuracy: 0.9720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0926 | 1.0 | 9375 | 0.0803 | 0.9720 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
NilsDamAi/nils-nl-to-rx-pt-v7
|
NilsDamAi
| 2022-11-04T11:50:32Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-04T11:37:28Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: nils-nl-to-rx-pt-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nils-nl-to-rx-pt-v7
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
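As an illustration (an assumption, not from the card), the checkpoint can be driven through the translation pipeline inherited from its MarianMT base; the sample input is a hypothetical German prescription-style phrase matching the de-en base model:
```python
from transformers import pipeline

translator = pipeline("translation", model="NilsDamAi/nils-nl-to-rx-pt-v7")
print(translator("Ibuprofen 400 mg zweimal täglich einnehmen")[0]["translation_text"])
```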
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4389 | 1.0 | 500 | 0.0470 |
| 0.0533 | 2.0 | 1000 | 0.0286 |
| 0.0346 | 3.0 | 1500 | 0.0224 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
harveymannering/q-Taxi-v3
|
harveymannering
| 2022-11-04T11:50:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T11:50:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the Hugging Face
# Deep RL course notebook; they are assumed to be defined in the surrounding context.
model = load_from_hub(repo_id="harveymannering/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
troesy/distilBERT-fresh
|
troesy
| 2022-11-04T10:30:15Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-04T10:19:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT-fresh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-fresh
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1444
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 174 | 0.1957 | 0.0 | 0.0 | 0.0 | 0.9289 |
| No log | 2.0 | 348 | 0.1591 | 0.0 | 0.0 | 0.0 | 0.9438 |
| 0.2272 | 3.0 | 522 | 0.1444 | 0.0 | 0.0 | 0.0 | 0.9489 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
lvwerra/test
|
lvwerra
| 2022-11-04T10:24:18Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"model-index",
"region:us"
] | null | 2022-11-04T10:20:02Z |
---
model-index:
- name: test
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ReactionGIF
type: julien-c/reactiongif
metrics:
- type: recall
value: 0.7762102282047272
name: Recall
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination-optional)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation-optional)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models [optional]:** [More Information Needed]
- **Parent Model [optional]:** [More Information Needed]
- **Resources for more information:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
[More Information Needed]
</details>
|
pe65374/xcoa-sbert-base-chinese-nli
|
pe65374
| 2022-11-04T09:29:06Z | 6 | 3 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"zh",
"arxiv:1909.05658",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-04T09:00:41Z |
---
language: zh
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
widget:
source_sentence: "那个人很开心"
sentences:
- 那个人非常开心
- 那只猫很开心
- 那个人在吃东西
---
# Chinese Sentence BERT
## Model description
This is the sentence embedding model pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
For easy testing, and to resolve the warning raised by sentence-transformers when initializing from it, I forked the original repo.
## Training data
[ChineseTextualInference](https://github.com/liuhuanyong/ChineseTextualInference/) is used as training data.
## Training procedure
The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for five epochs with a sequence length of 128, starting from the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.
```
python3 finetune/run_classifier_siamese.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
--vocab_path models/google_zh_vocab.txt \
--config_path models/sbert/base_config.json \
--train_path datasets/ChineseTextualInference/train.tsv \
--dev_path datasets/ChineseTextualInference/dev.tsv \
--learning_rate 5e-5 --epochs_num 5 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_sbert_from_uer_to_huggingface.py --input_model_path models/finetuned_model.bin \
--output_model_path pytorch_model.bin \
--layers_num 12
```
### BibTeX entry and citation info
```
@article{reimers2019sentence,
title={Sentence-bert: Sentence embeddings using siamese bert-networks},
author={Reimers, Nils and Gurevych, Iryna},
journal={arXiv preprint arXiv:1908.10084},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
|
neerajp/en_core_web_lg
|
neerajp
| 2022-11-04T08:42:19Z | 7 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] |
token-classification
| 2022-11-04T08:35:24Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_web_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8535469108
- name: NER Recall
type: recall
value: 0.8592748397
- name: NER F Score
type: f_score
value: 0.8564012977
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9734404547
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9204363007
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.9023174614
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.90444794
---
### Details: https://spacy.io/models/en#en_core_web_lg
English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `en_core_web_lg` |
| **Version** | `3.4.1` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (113 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.93 |
| `TOKEN_P` | 99.57 |
| `TOKEN_R` | 99.58 |
| `TOKEN_F` | 99.57 |
| `TAG_ACC` | 97.34 |
| `SENTS_P` | 91.79 |
| `SENTS_R` | 89.14 |
| `SENTS_F` | 90.44 |
| `DEP_UAS` | 92.04 |
| `DEP_LAS` | 90.23 |
| `ENTS_P` | 85.35 |
| `ENTS_R` | 85.93 |
| `ENTS_F` | 85.64 |
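A minimal usage sketch (an assumption, not from the original card; it presumes the pipeline package is installed locally, e.g. via spaCy's standard model installation — loading straight from this Hub repo may require the spacy-huggingface-hub tooling):
```python
import spacy

nlp = spacy.load("en_core_web_lg")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```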
|
xmcmic/Med-KEBERT
|
xmcmic
| 2022-11-04T07:59:47Z | 1,649 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"biomedical",
"en",
"license:openrail",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-04T06:00:17Z |
---
license: openrail
language:
- en
tags:
- bert
- biomedical
---
|
mahdikhojasteh/distilbert-base-uncased-finetuned-emotion
|
mahdikhojasteh
| 2022-11-04T06:38:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T20:56:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.934
- name: F1
type: f1
value: 0.9342352809170765
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1423
- Accuracy: 0.934
- F1: 0.9342
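As a usage illustration (not part of the original card), the fine-tuned checkpoint can be queried with a standard `transformers` text-classification pipeline; the label names come from the emotion dataset and are read from the model's own config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mahdikhojasteh/distilbert-base-uncased-finetuned-emotion",
)
# Returns a list like [{'label': ..., 'score': ...}]; labels come from the emotion dataset.
print(classifier("I can't wait to see you again!"))
```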
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7568 | 1.0 | 250 | 0.2651 | 0.912 | 0.9099 |
| 0.2008 | 2.0 | 500 | 0.1684 | 0.931 | 0.9316 |
| 0.1302 | 3.0 | 750 | 0.1556 | 0.933 | 0.9334 |
| 0.1046 | 4.0 | 1000 | 0.1466 | 0.933 | 0.9326 |
| 0.087 | 5.0 | 1250 | 0.1423 | 0.934 | 0.9342 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.6.1
- Tokenizers 0.12.1
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-convsen2
|
thisisHJLee
| 2022-11-04T04:17:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-04T01:25:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-convsen2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-convsen2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- Cer: 0.0012
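As a usage illustration (not from the original card), a minimal transcription sketch with the standard `transformers` ASR pipeline; `sample.wav` is a placeholder path, and wav2vec2 expects 16 kHz mono audio:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-convsen2",
)
# "sample.wav" is a placeholder; supply a 16 kHz mono recording of Korean speech.
print(asr("sample.wav")["text"])
```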
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8421 | 1.0 | 1762 | 0.2383 | 0.0591 |
| 0.1721 | 2.0 | 3524 | 0.0309 | 0.0060 |
| 0.065 | 3.0 | 5286 | 0.0094 | 0.0012 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
asahi417/tner-xlm-roberta-base-ontonotes5
|
asahi417
| 2022-11-04T03:24:37Z | 17,233 | 5 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"en",
"arxiv:2209.12616",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
---
# Model Card for XLM-RoBERTa for NER
XLM-RoBERTa finetuned on NER.
# Model Details
## Model Description
XLM-RoBERTa finetuned on NER.
- **Developed by:** Asahi Ushio
- **Shared by [Optional]:** Hugging Face
- **Model type:** Token Classification
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** XLM-RoBERTa
- **Parent Model:** XLM-RoBERTa
- **Resources for more information:**
- [GitHub Repo](https://github.com/asahi417/tner)
- [Associated Paper](https://arxiv.org/abs/2209.12616)
- [Space](https://huggingface.co/spaces/akdeniz27/turkish-named-entity-recognition)
# Uses
## Direct Use
Token Classification
## Downstream Use [Optional]
This model can be used in conjunction with the [tner library](https://github.com/asahi417/tner).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
An NER dataset contains a sequence of tokens and tags for each split (usually `train`/`validation`/`test`),
```python
{
'train': {
'tokens': [
['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
['From', 'Green', 'Newsfeed', ':', 'AHFA', 'extends', 'deadline', 'for', 'Sage', 'Award', 'to', 'Nov', '.', '5', 'http://tinyurl.com/24agj38'], ...
],
'tags': [
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ...
]
},
'validation': ...,
'test': ...,
}
```
with a dictionary to map a label to its index (`label2id`) as below.
```python
{"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4, "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
```
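For illustration, a minimal sketch in plain Python (values copied from the example above) that inverts `label2id` and prints one sentence in token/label form:

```python
label2id = {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
            "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
id2label = {idx: label for label, idx in label2id.items()}

tokens = ["From", "Green", "Newsfeed", ":", "AHFA", "extends", "deadline"]
tags = [0, 0, 0, 0, 3, 0, 0]

# Pair each token with its decoded IOB label.
for token, tag in zip(tokens, tags):
    print(f"{token}\t{id2label[tag]}")  # e.g. "AHFA\tB-PER"
```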
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
- **layer_norm_eps:** 1e-05
- **num_attention_heads:** 12
- **num_hidden_layers:** 12
- **vocab_size:** 250002
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [dataset card](https://github.com/asahi417/tner/blob/master/DATASET_CARD.md) for full dataset lists
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.eacl-demos.7",
pages = "53--62",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Asahi Ushio in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-ontonotes5")
```
</details>
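A short follow-up inference sketch, assuming the standard `transformers` token-classification interface; the label names are read from the model's `config.id2label`:

```python
import torch

text = "Dante was born in Florence."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred])
```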
|
0xkrm/q-Taxi-v3
|
0xkrm
| 2022-11-04T03:01:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-04T03:01:29Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: -102.93 +/- 209.24
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="0xkrm/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
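For illustration, a minimal greedy rollout under the loaded Q-table. This is a sketch that assumes `model["qtable"]` is a NumPy array indexed as `[state, action]`, that `gym` is imported, and the classic Gym step API (pre-0.26); `load_from_hub` and `evaluate_agent` are helpers from the Deep RL course notebook and are not shown here.

```python
import numpy as np

# Greedy rollout: always take the highest-value action from the Q-table.
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit only, no exploration
    state, reward, done, info = env.step(action)     # classic Gym API (pre-0.26)
    total_reward += reward
print(f"Episode return: {total_reward}")
```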
|
lvkaokao/bert-base-uncased-teacher-preparation-pretrain
|
lvkaokao
| 2022-11-04T02:50:34Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2022-09-27T06:13:02Z |
---
license: other
---
```bash
#!/bin/bash
# Apache v2 license
# Copyright (C) 2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Teacher Preparation
# Notes:
# Auto mixed precision can be used by adding --fp16
# Distributed training can be used with the torch.distributed.lauch app
TEACHER_PATH=./bert-base-uncased-teacher-preparation-pretrain
OUTPUT_DIR=$TEACHER_PATH
DATA_CACHE_DIR=/root/kaokao/Model-Compression-Research-Package/examples/transformers/language-modeling/wikipedia_processed_for_pretrain
python -m torch.distributed.launch \
--nproc_per_node=8 \
../../examples/transformers/language-modeling/run_mlm.py \
--model_name_or_path bert-base-uncased \
--datasets_name_config wikipedia:20200501.en \
--data_process_type segment_pair_nsp \
--dataset_cache_dir $DATA_CACHE_DIR \
--do_train \
--learning_rate 5e-5 \
--max_steps 100000 \
--warmup_ratio 0.01 \
--weight_decay 0.01 \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--logging_steps 10 \
--save_steps 5000 \
--save_total_limit 2 \
--output_dir $OUTPUT_DIR \
--run_name pofa-teacher-prepare-pretrain
```
|
theojolliffe/model-1-reverse-bart
|
theojolliffe
| 2022-11-04T02:25:54Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-03T23:08:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: model-1-reverse-bart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-1-reverse-bart
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3347
- Rouge1: 95.4467
- Rouge2: 91.7522
- Rougel: 95.448
- Rougelsum: 95.4377
- Gen Len: 15.5478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 0.0744 | 1.0 | 28039 | 0.3347 | 95.4467 | 91.7522 | 95.448 | 95.4377 | 15.5478 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
fake4325634/chkn
|
fake4325634
| 2022-11-04T02:18:20Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-03T23:31:04Z |
---
license: mit
---
Trained on amateur photographs of chickens from Reddit. Include "chkn" in a prompt to use.






|
Fputin/putinclown
|
Fputin
| 2022-11-04T01:56:20Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-11-03T23:50:05Z |
---
license: openrail
---
Decided to make a Dreambooth model today of Putin caricatures and cartoons that are banned in Russia, because F Putin.
The prompt is "putinclown".
Spread the love like he spreads hate! Have fun!
|
g30rv17ys/customdbmodelv6
|
g30rv17ys
| 2022-11-04T01:35:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:geevegeorge/customdbv6",
"license:apache-2.0",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-03T20:19:12Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: geevegeorge/customdbv6
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# customdbmodelv6
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/customdbv6` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
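A minimal loading sketch to stand in for the TODO above. This is an assumption, since the card does not document the intended pipeline class (the repo tags suggest `AudioDiffusionPipeline`), but the generic loader resolves the concrete class from the saved `model_index.json`; the repo id is taken from the TensorBoard link in this card:

```python
from diffusers import DiffusionPipeline

# The concrete pipeline class is resolved from the repository's model_index.json.
pipeline = DiffusionPipeline.from_pretrained("geevegeorge/customdbmodelv6")
output = pipeline()            # default sampling arguments; see the pipeline docs
output.images[0].save("sample.png")
```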
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/customdbmodelv6/tensorboard?#scalars)
|
ashish23993/t5-small-finetuned-xsum-AB
|
ashish23993
| 2022-11-04T00:28:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-03T07:48:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-AB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-AB
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8942
- Rouge1: 13.835
- Rouge2: 4.4916
- Rougel: 10.5998
- Rougelsum: 12.3225
- Gen Len: 19.0
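As a usage illustration (an assumption, not part of the original card), the checkpoint can be called through the standard summarization pipeline:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ashish23993/t5-small-finetuned-xsum-AB")
article = "Your long input document goes here..."  # placeholder text
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```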
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 2.9182 | 1.0 | 625 | 2.8942 | 13.835 | 4.4916 | 10.5998 | 12.3225 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bouim/hubert-large-arabic-darija
|
bouim
| 2022-11-03T23:27:08Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-03T22:11:06Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: hubert-large-arabic-darija
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-arabic-darija
This model is a fine-tuned version of [asafaya/hubert-large-arabic](https://huggingface.co/asafaya/hubert-large-arabic) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.2.dev0
- Tokenizers 0.13.1
|
pablorocg/Retinal_disease_model_v2
|
pablorocg
| 2022-11-03T21:37:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-03T21:30:30Z |
---
title: Retinal Disease
emoji: 🐠
colorFrom: pink
colorTo: blue
sdk: gradio
sdk_version: 2.9.4
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
|
drandran/asmonbald
|
drandran
| 2022-11-03T20:37:46Z | 0 | 4 | null |
[
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:unknown",
"region:us"
] |
text-to-image
| 2022-11-03T20:22:33Z |
---
license: unknown
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Asmongold model.ckpt for Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. I've trained it with Dreambooth on 20 images of Twitch streamer Asmongold for text-to-image illustration generation with Stable Diffusion.
Feel free to download, use, and share the model as you like. To trigger the AI to generate an illustration based on the trained Asmongold images, make sure to use the tag "asmonbald" in your prompts.
Example:
a detailed portrait photo of a man
vs
a detailed portrait photo of asmonbald
---
|
huggingtweets/kristincarolw
|
huggingtweets
| 2022-11-03T20:36:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T20:22:10Z |
---
language: en
thumbnail: http://www.huggingtweets.com/kristincarolw/1667507776021/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/558354319633039361/IWd6dt31_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pizza Hut</div>
<div style="text-align: center; font-size: 14px;">@kristincarolw</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pizza Hut.
| Data | Pizza Hut |
| --- | --- |
| Tweets downloaded | 2923 |
| Retweets | 527 |
| Short tweets | 413 |
| Tweets kept | 1983 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/999xba5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kristincarolw's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2osco534) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2osco534/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kristincarolw')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
impira/layoutlm-document-classifier
|
impira
| 2022-11-03T20:03:22Z | 166 | 11 |
transformers
|
[
"transformers",
"pytorch",
"layoutlm",
"text-classification",
"document-classification",
"pdf",
"invoices",
"en",
"arxiv:1912.13318",
"arxiv:1910.09700",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-20T05:19:18Z |
---
language: en
license: cc-by-nc-sa-4.0
tags:
- layoutlm
- document-classification
- pdf
- invoices
---
# Model Card for LayoutLM for Document Classification
# Model Details
## Model Description
This is a fine-tuned version of the multi-modal LayoutLM model for the task of classification on documents.
- **Developed by:** Impira team
- **Shared by [Optional]:** Hugging Face
- **Model type:** Text Classification
- **Language(s) (NLP):** en
- **License:** cc-by-nc-sa-4.0
- **Related Models:** layoutlm
- **Parent Model:** More information needed
- **Resources for more information:**
- [Associated Paper](https://arxiv.org/abs/1912.13318)
- [Blog Post](https://www.impira.com/blog/introducing-instant-invoices)
# Uses
## Direct Use
Text Classification
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
More information needed
### Preprocessing
More information needed
### Speeds, Sizes, Times
- **num_attention_heads:** 12
- **num_hidden_layers:** 12
- **vocab_size:** 30522
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
Transformers version: 4.4.0.dev0
# Citation
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Impira team in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-classifier")
model = AutoModelForSequenceClassification.from_pretrained("impira/layoutlm-document-classifier")
```
</details>
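A short follow-up inference sketch, assuming plain-text input; full LayoutLM usage normally also supplies `bbox` token coordinates from OCR, which are omitted here for brevity (the model falls back to zero boxes):

```python
import torch

text = "Invoice #1234 Total due: $56.78"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
pred = probs.argmax().item()
print(model.config.id2label[pred], float(probs[pred]))
```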
|
jlartey10/wav2vec2-large-xls-r-300m-tr-colab
|
jlartey10
| 2022-11-03T18:52:53Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-26T19:40:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: ga-IE
split: train+validation
args: ga-IE
metrics:
- name: Wer
type: wer
value: 0.593329432416618
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.5933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3421 | 14.81 | 400 | 1.1795 | 0.5922 |
| 0.113 | 29.63 | 800 | 1.1786 | 0.5933 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/deltazulu14
|
huggingtweets
| 2022-11-03T18:48:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T18:46:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/deltazulu14/1667501296205/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569374676933033984/NSveEXrv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Delta Zulu</div>
<div style="text-align: center; font-size: 14px;">@deltazulu14</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Delta Zulu.
| Data | Delta Zulu |
| --- | --- |
| Tweets downloaded | 881 |
| Retweets | 108 |
| Short tweets | 150 |
| Tweets kept | 623 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8h87mrlb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deltazulu14's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deltazulu14')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
santiagoahl/vit_model
|
santiagoahl
| 2022-11-03T18:20:04Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-03T17:40:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
model-index:
- name: vit_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
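As a usage illustration (not part of the original card), a minimal classification sketch that pulls a validation image from the beans dataset itself:

```python
from datasets import load_dataset
from transformers import pipeline

image = load_dataset("beans", split="validation")[0]["image"]  # a PIL image
classifier = pipeline("image-classification", model="santiagoahl/vit_model")
print(classifier(image))  # beans labels: angular_leaf_spot / bean_rust / healthy
```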
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/jldevezas
|
huggingtweets
| 2022-11-03T17:49:29Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T17:36:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jldevezas/1667497736714/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1352291023867834370/OcubRjdf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">José Devezas</div>
<div style="text-align: center; font-size: 14px;">@jldevezas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from José Devezas.
| Data | José Devezas |
| --- | --- |
| Tweets downloaded | 1690 |
| Retweets | 439 |
| Short tweets | 106 |
| Tweets kept | 1145 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27g8vb39/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jldevezas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16q8rwg7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16q8rwg7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jldevezas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vipz3/xlm-roberta-base-finetuned-panx-de
|
vipz3
| 2022-11-03T17:03:41Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-02T16:30:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
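As a usage illustration (an assumption, not from the original card), a German NER call through the token-classification pipeline with subword grouping:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="vipz3/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```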
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rexwang8/qilin-lit-6b
|
rexwang8
| 2022-11-03T16:58:09Z | 30 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"text generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-23T02:10:01Z |
---
language: en
thumbnail: "https://i.ibb.co/HBqvBFY/mountain-xianxia-chinese-scenic-landscape-craggy-mist-action-scene-pagoda-s-2336925014-1.png"
tags:
- text generation
- pytorch
license: mit
---
# Qilin-lit-6b Description
The most recent version is V1.1.0, which is fine-tuned on 550 MB of webnovels found on the NovelUpdates website. (https://www.novelupdates.com/)
The style is SFW and whimsical, excelling at telling fantasy stories, especially webnovels.
## Downstream Uses
This model can be used for entertainment purposes and as a creative writing assistant for fiction writers.
## Usage with Kobold AI Colab (Easiest)
GPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb
TPU -> https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb
Replace the drop-down value with "rexwang8/qilin-lit-6b" and select that model.
## Usage with Kobold AI Local
Load it via AI > Load a model from its directory. The model name is "rexwang8/qilin-lit-6b". If you get a config.json-not-found error, reload the program and give it some time to find your GPUs.
## Example Code
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('rexwang8/qilin-lit-6b')
tokenizer = AutoTokenizer.from_pretrained('rexwang8/lit-6b')
prompt = '''I had eyes but couldn't see Mount Tai!'''
input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```
---
## Qilin-lit-6b (V1.1.0)
Fine-tuned version of EleutherAI/gpt-j-6B (https://huggingface.co/EleutherAI/gpt-j-6B) on Coreweave's infrastructure (<https://www.coreweave.com/>) using an A40 over ~80 hours.
3150 steps, 1 epoch trained on 550 MB of primarily Xianxia genre Webnovels. (Translated to English)
---
## Team members and Acknowledgements
Rex Wang - Author
Coreweave - Computational materials
With help from:
Wes Brown, Anthony Mercurio
---
## Version History
1.1.0 - 550 MB Dataset(34 books) 3150 steps (no reordering, no sampling)
1.0.0 - 100 MB Dataset(3 books) 300 steps (no reordering, no sampling)
|
nerijs/sorrentino-diffusion
|
nerijs
| 2022-11-03T16:19:55Z | 0 | 5 | null |
[
"region:us"
] | null | 2022-11-02T18:50:17Z |
# Sorrentino Diffusion
Stable Diffusion model trained on images by the artists Andrea Sorrentino
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959158-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959179-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959240-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959181-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959118-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1667417959047-6303f37c3926de1f7ec42d3e.png" width="256">
</div>
## How to use
- Download the model and use it in your preferred UI (tested on AUTOMATIC1111's). Currently only the .ckpt version is supported.
- Trigger the style in your prompt with the **andreasorrentino** token; see the next sections for more examples.
## Versions
- **v1**: Trained on 25 images over 3000 Dreambooth steps. 1000, 1500, 2000, 2500 and 3000 steps checkpoints available to download
We currently provide multiple checkpoints at different steps so you can compare results. v1 is only an experiment with a low-quality dataset; results indicate the model might be overfitted. v2 will improve on dataset quality and quantity.
## Examples
**andreasorrentino style, a picture of a shiba inu**
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2207496243, Size: 512x512, Comparing v1 checkpoints

<hr />
**drawing of a porsche, andreasorrentino style**
Steps: 20, Sampler: Euler a, CFG scale: 7-15, Seed: 1734310449, Size: 512x512, andrea-sorrentino-v1_step_3000.ckpt

## Tips
- Use different ways to trigger the style: andreasorrentino style, YOUR_PROMPT | YOUR_PROMPT in the style of andreasorrentino | YOUR_PROMPT, andreasorrentino style
https://twitter.com/nerijs
|
LiveEvil/autotrain-testtextexists-1966366048
|
LiveEvil
| 2022-11-03T15:56:11Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-regression",
"en",
"dataset:orange6996/autotrain-data-testtextexists",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-11-03T15:55:43Z |
---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- orange6996/autotrain-data-testtextexists
co2_eq_emissions:
emissions: 0.3550338626114656
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1966366048
- CO2 Emissions (in grams): 0.3550
## Validation Metrics
- Loss: 4911.982
- MSE: 4911.981
- MAE: 68.106
- R2: -16.962
- RMSE: 70.086
- Explained Variance: -0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/orange6996/autotrain-testtextexists-1966366048
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("orange6996/autotrain-testtextexists-1966366048", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("orange6996/autotrain-testtextexists-1966366048", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
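For a single-column regression head (`num_labels == 1`), the prediction is the raw logit rather than a class probability; a short follow-up sketch under that assumption:

```python
predicted_value = outputs.logits.squeeze(-1).item()  # regression output, not a class score
print(predicted_value)
```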
|
Janst1000/buntesgelaber
|
Janst1000
| 2022-11-03T15:49:48Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-03T15:32:09Z |
## Setup
To use this model, please clone the following GitHub repository: https://github.com/Janst1000/buntesgelaber
## How this model was trained
This model was trained on https://github.com/bundestag/gesetze. I wrote a simple script that takes all of the text in the repository and puts it into a single text file. Then I trained the model following the Hugging Face tutorial: https://huggingface.co/blog/how-to-train
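As a usage illustration (an assumption, not from the original repo), the model can also be queried directly with the fill-mask pipeline; RoBERTa-style tokenizers, as used in the linked tutorial, use `<mask>` as the mask token:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Janst1000/buntesgelaber")
# German example sentence (placeholder): "The Bundestag has passed the <mask>."
for candidate in fill("Der Bundestag hat das <mask> beschlossen."):
    print(candidate["token_str"], round(candidate["score"], 3))
```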
|
fxmarty/tiny-bert-sst2-distilled-clone
|
fxmarty
| 2022-11-03T15:29:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T14:37:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: tiny-bert-sst2-distilled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8325688073394495
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-bert-sst2-distilled
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7305
- Accuracy: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007199555649276667
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.77 | 1.0 | 66 | 1.6939 | 0.8165 |
| 0.729 | 2.0 | 132 | 1.5090 | 0.8326 |
| 0.5242 | 3.0 | 198 | 1.5369 | 0.8257 |
| 0.4017 | 4.0 | 264 | 1.7025 | 0.8326 |
| 0.327 | 5.0 | 330 | 1.6743 | 0.8245 |
| 0.2749 | 6.0 | 396 | 1.7305 | 0.8337 |
| 0.2521 | 7.0 | 462 | 1.7305 | 0.8326 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
gogzy/t5-base-finetuned_renre_item1
|
gogzy
| 2022-11-03T15:27:48Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-03T15:24:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gogzy/t5-base-finetuned_renre_item1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gogzy/t5-base-finetuned_renre_item1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.5613
- Validation Loss: 6.0177
- Train Rouge1: 9.4862
- Train Rouge2: 6.3745
- Train Rougel: 7.9051
- Train Rougelsum: 9.4862
- Train Gen Len: 19.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 13.9387 | 10.3276 | 7.1429 | 1.6 | 4.7619 | 5.5556 | 19.0 | 0 |
| 12.7511 | 9.4693 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 1 |
| 11.3785 | 8.4321 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 2 |
| 9.9856 | 7.2054 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 3 |
| 8.5613 | 6.0177 | 9.4862 | 6.3745 | 7.9051 | 9.4862 | 19.0 | 4 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
TTian/bert-mlm-feedback
|
TTian
| 2022-11-03T15:20:42Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-03T14:59:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-mlm-feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mlm-feedback
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2248 | 1.0 | 350 | 1.5091 |
| 2.0629 | 2.0 | 700 | 1.2582 |
| 2.0031 | 3.0 | 1050 | 1.4637 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
MS-Go/autotrain-bart_normaldata-1976866012
|
MS-Go
| 2022-11-03T15:20:24Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"summarization",
"unk",
"dataset:MS-Go/autotrain-data-bart_normaldata",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-03T14:57:15Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- MS-Go/autotrain-data-bart_normaldata
co2_eq_emissions:
emissions: 41.152874017879256
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1976866012
- CO2 Emissions (in grams): 41.1529
## Validation Metrics
- Loss: 2.837
- Rouge1: 34.318
- Rouge2: 6.495
- RougeL: 18.460
- RougeLsum: 30.998
- Gen Len: 141.027
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/MS-Go/autotrain-bart_normaldata-1976866012
```
|
popaqy/pegasus-base-qag-bg-finetuned-grammar-bg
|
popaqy
| 2022-11-03T15:05:58Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-03T14:37:36Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pegasus-base-qag-bg-finetuned-grammar-bg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-base-qag-bg-finetuned-grammar-bg
This model is a fine-tuned version of [rmihaylov/pegasus-base-qag-bg](https://huggingface.co/rmihaylov/pegasus-base-qag-bg) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2544
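As a usage illustration (an assumption, since the card does not document inference), a minimal seq2seq correction sketch; the Bulgarian input sentence is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "popaqy/pegasus-base-qag-bg-finetuned-grammar-bg"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "тя отиде на училище вчера"  # placeholder sentence to correct
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```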
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4405 | 1.0 | 375 | 1.2704 |
| 1.2396 | 2.0 | 750 | 1.2544 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/cosm1cgrandma
|
huggingtweets
| 2022-11-03T14:51:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T14:50:19Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cosm1cgrandma/1667487071319/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491563915746201600/Sl5-btX4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cosmic grandma</div>
<div style="text-align: center; font-size: 14px;">@cosm1cgrandma</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cosmic grandma.
| Data | cosmic grandma |
| --- | --- |
| Tweets downloaded | 2995 |
| Retweets | 1342 |
| Short tweets | 318 |
| Tweets kept | 1335 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w5yrh2i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cosm1cgrandma's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2c5z2l0f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2c5z2l0f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cosm1cgrandma')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
iliemihai/mt5-base-romanian-diacritics
|
iliemihai
| 2022-11-03T14:51:27Z | 98 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"romanian",
"seq2seq",
"t5",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-02T15:09:37Z |
---
language: ro
inference: true
license: apache-2.0
tags:
- romanian
- seq2seq
- t5
---
This is a fine-tuned version of the [mt5-base-romanian](https://huggingface.co/dumitrescustefan/mt5-base-romanian) base model (**390M** parameters).
The model was fine-tuned on the [romanian diacritics dataset](https://huggingface.co/datasets/dumitrescustefan/diacritic) for 150k steps with a batch size of 8. Both the encoder and decoder sequence lengths are 256. It was trained with the following [scripts](https://github.com/iliemihai/t5x_diacritics).
### How to load the fine-tuned mt5x model
```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained('iliemihai/mt5-base-romanian-diacritics')
tokenizer = T5Tokenizer.from_pretrained('iliemihai/mt5-base-romanian-diacritics')
input_text = "A inceput sa ii taie un fir de par, iar fata sta in fata, tine camasa de in in mana si canta nota SI."
inputs = tokenizer(input_text, max_length=256, truncation=True, return_tensors="pt")
outputs = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
output = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(output) # this will print "A început să îi taie un fir de păr, iar fata stă în față, ține cămașa de in în mână și cântă nota SI"
```
### Evaluation
Evaluation will be done soon [here]()
### Acknowledgements
We'd like to thank [TPU Research Cloud](https://sites.research.google/trc/about/) for providing the TPUv3 cores we used to train these models!
### Authors
Yours truly,
_[Stefan Dumitrescu](https://github.com/dumitrescustefan), [Mihai Ilie](https://github.com/iliemihai) and [Per Egil Kummervold](https://huggingface.co/north)_
|
royam0820/ddpm-butterflies-128
|
royam0820
| 2022-11-03T14:19:07Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-03T13:03:24Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
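Pending the official snippet, a minimal sketch is shown below; it assumes the checkpoint was pushed as a standard `DDPMPipeline`, and the step count is the usual DDPM default rather than a value taken from this card.
```python
from diffusers import DDPMPipeline

# Load the unconditional butterfly pipeline from the Hub (assumed DDPMPipeline checkpoint)
pipeline = DDPMPipeline.from_pretrained("royam0820/ddpm-butterflies-128")

# Sample one 128x128 image; 1000 denoising steps is the common DDPM default
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```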
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/royam0820/ddpm-butterflies-128/tensorboard?#scalars)
|
DogeAI/finetuning-sentiment-model-3000-samples
|
DogeAI
| 2022-11-03T13:35:42Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T04:49:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8692810457516339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3163
- Accuracy: 0.8667
- F1: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
calicxy/wav2vec2-base-finetuned-ks
|
calicxy
| 2022-11-03T13:16:03Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-11-03T11:20:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1135
- Accuracy: 0.3403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2574 | 0.99 | 40 | 2.1881 | 0.2917 |
| 2.1367 | 1.99 | 80 | 2.1433 | 0.2917 |
| 2.1535 | 2.99 | 120 | 2.1255 | 0.2917 |
| 2.159 | 3.99 | 160 | 2.1135 | 0.3403 |
| 2.1341 | 4.99 | 200 | 2.1027 | 0.3403 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jayantapaul888/twitter-data-pysentimiento-robertuito-sentiment-finetuned-memes
|
jayantapaul888
| 2022-11-03T13:05:05Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T11:29:43Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-pysentimiento-robertuito-sentiment-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-pysentimiento-robertuito-sentiment-finetuned-memes
This model is a fine-tuned version of [pysentimiento/robertuito-sentiment-analysis](https://huggingface.co/pysentimiento/robertuito-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2563
- Accuracy: 0.9262
- Precision: 0.9271
- Recall: 0.9262
- F1: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3641 | 1.0 | 1762 | 0.3197 | 0.8999 | 0.9001 | 0.8999 | 0.8995 |
| 0.272 | 2.0 | 3524 | 0.2723 | 0.9171 | 0.9181 | 0.9171 | 0.9171 |
| 0.2451 | 3.0 | 5286 | 0.2633 | 0.9224 | 0.9226 | 0.9224 | 0.9223 |
| 0.2084 | 4.0 | 7048 | 0.2518 | 0.9256 | 0.9270 | 0.9256 | 0.9257 |
| 0.199 | 5.0 | 8810 | 0.2545 | 0.9268 | 0.9277 | 0.9268 | 0.9269 |
| 0.1926 | 6.0 | 10572 | 0.2563 | 0.9262 | 0.9271 | 0.9262 | 0.9263 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Gaborandi/Clinical-Longformer-SurgicalCardiothoracic
|
Gaborandi
| 2022-11-03T12:43:10Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-03T02:53:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-SurgicalCardiothoracic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-SurgicalCardiothoracic
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
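Until more detail is added, one plausible way to try the model is through the standard fill-mask pipeline; the example sentence below is purely illustrative, and `<mask>` is assumed to be the Longformer (RoBERTa-style) mask token.
```python
from transformers import pipeline

# Clinical-Longformer is RoBERTa-based, so the mask token is "<mask>"
fill_mask = pipeline("fill-mask", model="Gaborandi/Clinical-Longformer-SurgicalCardiothoracic")
print(fill_mask("The patient underwent coronary artery bypass <mask>."))
```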
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1515 | 1.1133 |
| No log | 2.0 | 3030 | 1.0476 |
| No log | 3.0 | 4545 | 1.0114 |
| No log | 4.0 | 6060 | 0.9958 |
| No log | 5.0 | 7575 | 0.9928 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
DavidCollier/distilbert-base-uncased-finetuned-imdb
|
DavidCollier
| 2022-11-03T12:38:37Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-03T12:31:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
seanfarrell/set_fit_experiment
|
seanfarrell
| 2022-11-03T12:26:38Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-02T15:08:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2040 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2040,
"warmup_steps": 204,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Enverrr/ViT_exp_1
|
Enverrr
| 2022-11-03T11:24:15Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-03T11:13:44Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ViT_exp_1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9732142686843872
---
# ViT_exp_1
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### dog

#### donkey

#### lion

#### monkey

|
paulhindemith/test-zeroshot
|
paulhindemith
| 2022-11-03T10:48:01Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"test-zeroshot",
"zero-shot-classification",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-11-03T06:45:41Z |
---
pipeline_tag: zero-shot-classification
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
---
|
HPL/distilbert-base-uncased-finetuned-emotion
|
HPL
| 2022-11-03T08:29:46Z | 106 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T07:03:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9408676491029256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1465
- Accuracy: 0.9405
- F1: 0.9409
## Model description
More information needed
## Intended uses & limitations
More information needed
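As a starting point, the checkpoint can be loaded with the standard text-classification pipeline; the input sentence below is only an illustration.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HPL/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # returns the predicted emotion label and score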
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8341 | 1.0 | 250 | 0.2766 | 0.9105 | 0.9088 |
| 0.2181 | 2.0 | 500 | 0.1831 | 0.9305 | 0.9308 |
| 0.141 | 3.0 | 750 | 0.1607 | 0.93 | 0.9305 |
| 0.1102 | 4.0 | 1000 | 0.1509 | 0.935 | 0.9344 |
| 0.0908 | 5.0 | 1250 | 0.1465 | 0.9405 | 0.9409 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
takizawa/distilbert-base-uncased-finetuned-emotion
|
takizawa
| 2022-11-03T06:30:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-03T06:17:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.924985636202576
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8481 | 1.0 | 250 | 0.3248 | 0.907 | 0.9028 |
| 0.2595 | 2.0 | 500 | 0.2251 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
|
DrishtiSharma
| 2022-11-03T05:42:08Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- hi
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xls-r-300m-hi-CV7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 35.31946325249292
- name: Test CER
type: cer
value: 11.310803379493076
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: vot
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6588
- Wer: 0.2987
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with the test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
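For quick inference (not an official snippet from this card), the checkpoint can also be used with the automatic-speech-recognition pipeline; the audio path below is a placeholder for any 16 kHz Hindi recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7")
print(asr("sample_hindi_16khz.wav"))  # hypothetical local audio file
```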
### Training hyperparameters
The following hyperparameters were used during training:
#
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.809 | 1.36 | 200 | 6.2066 | 1.0 |
| 4.3402 | 2.72 | 400 | 3.5184 | 1.0 |
| 3.4365 | 4.08 | 600 | 3.2779 | 1.0 |
| 1.8643 | 5.44 | 800 | 0.9875 | 0.6270 |
| 0.7504 | 6.8 | 1000 | 0.6382 | 0.4666 |
| 0.5328 | 8.16 | 1200 | 0.6075 | 0.4505 |
| 0.4364 | 9.52 | 1400 | 0.5785 | 0.4215 |
| 0.3777 | 10.88 | 1600 | 0.6279 | 0.4227 |
| 0.3374 | 12.24 | 1800 | 0.6536 | 0.4192 |
| 0.3236 | 13.6 | 2000 | 0.5911 | 0.4047 |
| 0.2877 | 14.96 | 2200 | 0.5955 | 0.4097 |
| 0.2643 | 16.33 | 2400 | 0.5923 | 0.3744 |
| 0.2421 | 17.68 | 2600 | 0.6307 | 0.3814 |
| 0.2218 | 19.05 | 2800 | 0.6036 | 0.3764 |
| 0.2046 | 20.41 | 3000 | 0.6286 | 0.3797 |
| 0.191 | 21.77 | 3200 | 0.6517 | 0.3889 |
| 0.1856 | 23.13 | 3400 | 0.6193 | 0.3661 |
| 0.1721 | 24.49 | 3600 | 0.7034 | 0.3727 |
| 0.1656 | 25.85 | 3800 | 0.6293 | 0.3591 |
| 0.1532 | 27.21 | 4000 | 0.6075 | 0.3611 |
| 0.1507 | 28.57 | 4200 | 0.6313 | 0.3565 |
| 0.1381 | 29.93 | 4400 | 0.6564 | 0.3578 |
| 0.1359 | 31.29 | 4600 | 0.6724 | 0.3543 |
| 0.1248 | 32.65 | 4800 | 0.6789 | 0.3512 |
| 0.1198 | 34.01 | 5000 | 0.6442 | 0.3539 |
| 0.1125 | 35.37 | 5200 | 0.6676 | 0.3419 |
| 0.1036 | 36.73 | 5400 | 0.7017 | 0.3435 |
| 0.0982 | 38.09 | 5600 | 0.6828 | 0.3319 |
| 0.0971 | 39.45 | 5800 | 0.6112 | 0.3351 |
| 0.0968 | 40.81 | 6000 | 0.6424 | 0.3252 |
| 0.0893 | 42.18 | 6200 | 0.6707 | 0.3304 |
| 0.0878 | 43.54 | 6400 | 0.6432 | 0.3236 |
| 0.0827 | 44.89 | 6600 | 0.6696 | 0.3240 |
| 0.0788 | 46.26 | 6800 | 0.6564 | 0.3180 |
| 0.0753 | 47.62 | 7000 | 0.6574 | 0.3130 |
| 0.0674 | 48.98 | 7200 | 0.6698 | 0.3175 |
| 0.0676 | 50.34 | 7400 | 0.6441 | 0.3142 |
| 0.0626 | 51.7 | 7600 | 0.6642 | 0.3121 |
| 0.0617 | 53.06 | 7800 | 0.6615 | 0.3117 |
| 0.0599 | 54.42 | 8000 | 0.6634 | 0.3059 |
| 0.0538 | 55.78 | 8200 | 0.6464 | 0.3033 |
| 0.0571 | 57.14 | 8400 | 0.6503 | 0.3018 |
| 0.0491 | 58.5 | 8600 | 0.6625 | 0.3025 |
| 0.0511 | 59.86 | 8800 | 0.6588 | 0.2987 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
g30rv17ys/customdbmodelv4
|
g30rv17ys
| 2022-11-03T04:52:22Z | 8 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:geevegeorge/customdbv3",
"license:apache-2.0",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-02T18:58:10Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: geevegeorge/customdbv3
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# customdbmodelv4
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/customdbv3` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
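As a placeholder until the official snippet is added, here is a rough sketch that assumes the checkpoint loads as the `AudioDiffusionPipeline` named in the repo tags; the output attribute names (`images`, `audios`) follow the generic diffusers pattern and are assumptions to verify against the installed version.
```python
from diffusers import AudioDiffusionPipeline

# Load the pipeline from the Hub (assumes an AudioDiffusionPipeline checkpoint, per the repo tags)
pipeline = AudioDiffusionPipeline.from_pretrained("geevegeorge/customdbmodelv4")

# Generate one sample; the output is assumed to carry a mel-spectrogram image and the decoded audio
output = pipeline(batch_size=1)
output.images[0].save("sample_spectrogram.png")
audio = output.audios[0]  # raw waveform as a NumPy array
```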
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/customdbmodelv4/tensorboard?#scalars)
|
lIlBrother/ko-barTNumText
|
lIlBrother
| 2022-11-03T04:36:26Z | 13 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"ko",
"dataset:aihub",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-31T01:13:19Z |
---
language:
- ko # Example: fr
license: apache-2.0 # Example: apache-2.0 or any license from https://hf.co/docs/hub/repositories-licenses
library_name: transformers # Optional. Example: keras or any library from https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Libraries.ts
tags:
- text2text-generation # Example: audio
datasets:
- aihub # Example: common_voice. Use dataset id from https://hf.co/datasets
metrics:
- bleu # Example: wer. Use metric id from https://hf.co/metrics
- rouge
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: ko-barTNumText
results:
- task:
type: text2text-generation # Required. Example: automatic-speech-recognition
name: text2text-generation # Optional. Example: Speech Recognition
metrics:
- type: bleu # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9313276940897475 # Required. Example: 20.90
name: eval_bleu # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- type: rouge1 # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9607081256861959 # Required. Example: 20.90
name: eval_rouge1 # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- type: rouge2 # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9394649136169404 # Required. Example: 20.90
name: eval_rouge2 # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- type: rougeL # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9605735834651536 # Required. Example: 20.90
name: eval_rougeL # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
- type: rougeLsum # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 0.9605993760190767 # Required. Example: 20.90
name: eval_rougeLsum # Optional. Example: Test WER
verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
---
# ko-barTNumText(TNT Model🧨): Try Number To Korean Reading(숫자를 한글로 바꾸는 모델)
## Table of Contents
- [ko-barTNumText(TNT Model🧨): Try Number To Korean Reading(숫자를 한글로 바꾸는 모델)](#ko-bartnumtexttnt-model-try-number-to-korean-reading숫자를-한글로-바꾸는-모델)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Evaluation](#evaluation)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
I couldn't find an existing model or algorithm that did this, so I built one myself. <br />
A BartForConditionalGeneration model fine-tuned to convert numbers into their Korean (Hangul) readings. <br />
- Dataset: [Korea aihub](https://aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&srchDataRealmCode=REALM002&srchDataTy=DATA004) <br />
The datasets were obtained from Korea AI Hub, and for privacy reasons none of the data used for fine-tuning can be released. <br />
- Korea AI Hub data is available to Koreans ONLY! <br />
(Anyone downloading data from AI Hub will be Korean, which is why this part was originally written in Korean.) <br />
Strictly speaking, the model was trained to translate phonetic transcriptions into orthographic transcriptions, following the ETRI transcription guidelines. <br />
- Since ten million can be written as 1000만 or as 10000000, results may vary depending on the training datasets. <br />
- **Results can change noticeably depending on the spacing between numeral determiners and bound nouns (쉰살, 쉰 살 -> 쉰살, 50살).** https://eretz2.tistory.com/34 <br />
Rather than enforcing one spacing convention, the model was left to follow the training-data distribution, since it is unclear how it will be used (is 쉰 살 or 쉰살 more common?).
- **Developed by:** [Yoo SungHyun](https://github.com/YooSungHyun)
- **Language(s):** Korean
- **License:** apache-2.0
- **Parent Model:** See the [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) for more information about the pre-trained base model.
## Uses
For more detail, see [KoGPT_num_converter](https://github.com/ddobokki/KoGPT_num_converter) <br /> and in particular `bart_inference.py` and `bart_train.py`
## Evaluation
Evaluation uses `evaluate-metric/bleu` and `evaluate-metric/rouge` from the Hugging Face `evaluate` library. <br />
[Training wanDB URL](https://wandb.ai/bart_tadev/BartForConditionalGeneration/runs/326xgytt?workspace=user-bart_tadev)
## How to Get Started With the Model
```python
from transformers.pipelines import Text2TextGenerationPipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
texts = ["그러게 누가 6시까지 술을 마시래?"]
tokenizer = AutoTokenizer.from_pretrained("lIlBrother/ko-barTNumText")
model = AutoModelForSeq2SeqLM.from_pretrained("lIlBrother/ko-barTNumText")
seq2seqlm_pipeline = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)
kwargs = {
"min_length": 0,
"max_length": 1206,
"num_beams": 100,
"do_sample": False,
"num_beam_groups": 1,
}
pred = seq2seqlm_pipeline(texts, **kwargs)
print(pred)
# 그러게 누가 여섯 시까지 술을 마시래?
```
|
GItaf/gpt2-gpt2-mc-weight0.25-epoch2-new-nosharing
|
GItaf
| 2022-11-03T03:40:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T03:30:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight0.25-epoch2-new-nosharing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight0.25-epoch2-new-nosharing
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3672
- Cls loss: 1.4634
- Lm loss: 4.0012
- Cls Accuracy: 0.6121
- Cls F1: 0.6023
- Cls Precision: 0.6288
- Cls Recall: 0.6121
- Perplexity: 54.66
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Cls loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Lm loss | Perplexity | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------------:|:------:|:-------------:|:----------:|:-------:|:----------:|:---------------:|
| 4.6729 | 1.0 | 3470 | 1.5425 | 0.5689 | 0.5448 | 0.5732 | 0.5689 | 4.0392 | 56.78 | 4.4248 |
| 4.3854 | 2.0 | 6940 | 1.4634 | 0.6121 | 0.6023 | 0.6288 | 0.6121 | 4.0012 | 54.66 | 4.3672 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/gpt2-gpt2-mc-weight0.25-epoch2-new
|
GItaf
| 2022-11-03T03:39:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-03T03:25:54Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight0.25-epoch2-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight0.25-epoch2-new
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3629
- Cls loss: 1.4483
- Lm loss: 4.0006
- Cls Accuracy: 0.6023
- Cls F1: 0.5950
- Cls Precision: 0.6174
- Cls Recall: 0.6023
- Perplexity: 54.63
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Cls loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Lm loss | Perplexity | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:------------:|:------:|:-------------:|:----------:|:-------:|:----------:|:---------------:|
| 4.674 | 1.0 | 3470 | 1.5961 | 0.5487 | 0.5279 | 0.5643 | 0.5487 | 4.0380 | 56.71 | 4.4372 |
| 4.3809 | 2.0 | 6940 | 1.4483 | 0.6023 | 0.5950 | 0.6174 | 0.6023 | 4.0006 | 54.63 | 4.3629 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
lilouuch/t5-small-finetuned-xsum_epoch4
|
lilouuch
| 2022-11-03T03:32:33Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-02T12:18:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum_epoch4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum_epoch4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4245
- Rouge1: 29.5204
- Rouge2: 8.4931
- Rougel: 22.9705
- Rougelsum: 23.0872
- Gen Len: 18.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7175 | 1.0 | 7620 | 2.4899 | 28.585 | 7.7626 | 22.1314 | 22.2424 | 18.8174 |
| 2.6605 | 2.0 | 15240 | 2.4486 | 29.2362 | 8.2481 | 22.7049 | 22.8227 | 18.8273 |
| 2.6368 | 3.0 | 22860 | 2.4303 | 29.4228 | 8.4312 | 22.8991 | 23.0192 | 18.8262 |
| 2.6284 | 4.0 | 30480 | 2.4245 | 29.5204 | 8.4931 | 22.9705 | 23.0872 | 18.8221 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Gaborandi/Bio_ClinicalBERT-SurgicalCardiothoracic
|
Gaborandi
| 2022-11-03T01:57:32Z | 36 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-02T17:05:08Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Bio_ClinicalBERT-SurgicalCardiothoracic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-SurgicalCardiothoracic
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 13144 | 0.9092 |
| No log | 2.0 | 26288 | 0.8575 |
| No log | 3.0 | 39432 | 0.8417 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
alkerek/kerek
|
alkerek
| 2022-11-03T00:22:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-03T00:22:54Z |
---
license: creativeml-openrail-m
---
|
Bingsu/ko_BBPE_tokenizer_bert2
|
Bingsu
| 2022-11-03T00:20:59Z | 0 | 0 | null |
[
"bert",
"tokenizer only",
"ko",
"license:mit",
"region:us"
] | null | 2022-11-03T00:19:01Z |
---
language:
- ko
tags:
- bert
- tokenizer only
license:
- mit
---
## Library versions
- transformers: 4.23.1
- datasets: 2.6.1
- tokenizers: 0.13.1
This is [Bingsu/ko_BBPE_tokenizer_roberta](https://huggingface.co/Bingsu/ko_BBPE_tokenizer_roberta) with the unicode normalizer changed to `nfc`, the post-processor changed to BertProcessing, and the tokenizer class changed to `BertTokenizerFast`.
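A minimal loading example, assuming the repository contains only the tokenizer files (as the tags suggest) and the sample sentence is purely illustrative:
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("Bingsu/ko_BBPE_tokenizer_bert2")
print(tokenizer.tokenize("안녕하세요, 토크나이저 테스트입니다."))
```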
|
sd-concepts-library/hoi4-leaders
|
sd-concepts-library
| 2022-11-02T23:37:15Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-02T23:37:11Z |
---
license: mit
---
### HOI4 Leaders on Stable Diffusion
This is the `<HOI4-Leader>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
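If you prefer to use the concept locally with diffusers rather than the notebooks above, a rough sketch looks like the following; it assumes a diffusers version that provides `load_textual_inversion` and uses the `runwayml/stable-diffusion-v1-5` base checkpoint as an example.
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Pull the learned <HOI4-Leader> embedding straight from this concept repository
pipe.load_textual_inversion("sd-concepts-library/hoi4-leaders")
image = pipe("a portrait of a general in the style of <HOI4-Leader>").images[0]
image.save("hoi4_leader.png")
```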
Here is the new concept you will be able to use as a `style`:





































































































































|
alicekwak/TN-final-multi-qa-mpnet-base-dot-v1
|
alicekwak
| 2022-11-02T23:06:04Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-02T23:05:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# alicekwak/TN-final-multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('alicekwak/TN-final-multi-qa-mpnet-base-dot-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('alicekwak/TN-final-multi-qa-mpnet-base-dot-v1')
model = AutoModel.from_pretrained('alicekwak/TN-final-multi-qa-mpnet-base-dot-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=alicekwak/TN-final-multi-qa-mpnet-base-dot-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 675 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bglick13/ddpm-butterflies-128
|
bglick13
| 2022-11-02T22:37:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-01T15:31:43Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
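In the absence of the official snippet, a minimal sketch (assuming the checkpoint is a standard `DDPMPipeline`) would be:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("bglick13/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]  # 1000 steps is the usual DDPM default
image.save("butterfly.png")
```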
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/bglick13/ddpm-butterflies-128/tensorboard?#scalars)
|
bayartsogt/wav2vec2-xls-r-300m-mn-demo
|
bayartsogt
| 2022-11-02T22:06:29Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-02T19:53:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-mn-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mn-demo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9633
- Wer: 0.5586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.5564 | 6.77 | 400 | 2.8622 | 0.9998 |
| 1.0005 | 13.55 | 800 | 0.9428 | 0.6614 |
| 0.3018 | 20.34 | 1200 | 0.9611 | 0.5860 |
| 0.1918 | 27.12 | 1600 | 0.9633 | 0.5586 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
osanseviero/test_sentence_transformers3
|
osanseviero
| 2022-11-02T21:57:44Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:s2orc",
"dataset:ms_marco",
"dataset:wiki_atomic_edits",
"dataset:snli",
"dataset:multi_nli",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/coco_captions",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/QQP",
"dataset:yahoo_answers_topics",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-02T21:57:39Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
---
# sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L3-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
osanseviero/test_sentence_transformers2
|
osanseviero
| 2022-11-02T21:53:25Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:s2orc",
"dataset:ms_marco",
"dataset:wiki_atomic_edits",
"dataset:snli",
"dataset:multi_nli",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/coco_captions",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/QQP",
"dataset:yahoo_answers_topics",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-02T21:53:19Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- flax-sentence-embeddings/stackexchange_xml
- s2orc
- ms_marco
- wiki_atomic_edits
- snli
- multi_nli
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/flickr30k-captions
- embedding-data/coco_captions
- embedding-data/sentence-compression
- embedding-data/QQP
- yahoo_answers_topics
---
# sentence-transformers/paraphrase-MiniLM-L3-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
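For the similarity / semantic-search use case mentioned above, a small illustrative extension of this snippet (assuming a reasonably recent sentence-transformers release that ships `util.cos_sim`) scores a sentence pair:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L3-v2')
# Encode two sentences and compare them with cosine similarity
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"],
                          convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))
```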
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L3-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L3-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
Nerfgun3/NekoModel
|
Nerfgun3
| 2022-11-02T21:44:45Z | 0 | 16 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-02T09:00:49Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Neko Model
This model was trained on 100 Neko Girl Pictures
## Usage
To use this model you have to download the file as well as drop it into the "\stable-diffusion-webui\models\Stable-diffusion" folder.
Token: ```neko```
If it is too strong, just add [] around it.
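Purely as an illustration (the surrounding tags are made up, not taken from the training data), a prompt using the token could look like:
```
neko, 1girl, cat ears, smiling, sitting on a windowsill, detailed background
```
If the style dominates too much, weaken it with `[neko]` instead.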
Trained until 10000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/MpyeqMe.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/wxzvHrL.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/MuUnJY5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/XeDC8xA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/XmLTrEl.png width=100% height=100%/></td>
</tr>
</table>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
The-Fanta/distilbert-base-uncased-finetuned-cola
|
The-Fanta
| 2022-11-02T21:41:51Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T21:41:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: The-Fanta/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# The-Fanta/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5162
- Validation Loss: 0.4561
- Train Matthews Correlation: 0.4968
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5162 | 0.4561 | 0.4968 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/t4tclussy
|
huggingtweets
| 2022-11-02T21:39:21Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-02T21:36:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/t4tclussy/1667425132769/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1576359096504258563/vRp_mOiv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🎃🦇🪦spooky rat🎃🦇🪦</div>
<div style="text-align: center; font-size: 14px;">@t4tclussy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🎃🦇🪦spooky rat🎃🦇🪦.
| Data | 🎃🦇🪦spooky rat🎃🦇🪦 |
| --- | --- |
| Tweets downloaded | 3119 |
| Retweets | 1463 |
| Short tweets | 268 |
| Tweets kept | 1388 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rt9srp7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @t4tclussy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18rnibwz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18rnibwz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/t4tclussy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dodge99/ppo-LunarLander-v2
|
dodge99
| 2022-11-02T21:28:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-02T21:27:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 207.28 +/- 90.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
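As a minimal sketch of what that could look like (the checkpoint filename and the classic pre-0.26 `gym` step API are assumptions — check the repository's file listing and your installed gym version):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="dodge99/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out one episode (reset() returning only the observation assumes the older gym API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward:.2f}")
```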
|
SanDiegoDude/WheresWaldoStyle
|
SanDiegoDude
| 2022-11-02T20:52:47Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-02T19:42:33Z |
---
license: mit
---
Hello! This is a model trained for 14,000 steps on the famous Where's Waldo / Where's Wally art style. (I'm American, so I named the style Waldo; if you're more familiar with Wally, my apologies!)
The keyword to invoke the style is "Wheres Waldo style" and I've found it works best when you use it in conjunction with real world locations if you want to ground it at least a little bit in reality. If you really want the Wally/Waldo look, be sure to include "Bright primary colors" in your prompt, and add things like "pastel colors" and "washed out colors" to your negative prompts. You can also control the amount of "Waldo-ness" by de-emphasizing the style in your prompt.
For example,
"(Wheres Waldo Style:1.0), A busy street in New York City, (bright primary colors:1.2)" results in the following image:

While
"(Wheres Waldo Style:0.6), A busy street in New York City, (bright primary colors:1.2) brings in some details about New York city like the subway entrance that you won't find at full strength style.

One thing to keep in mind: if you try to just spit out a 2048 x 2048 image, it's not going to give you Waldo, it's going to give you a monstrosity like this:

I've found the sweet spot for this model to be between 512 x 512 and a max of about 640 x 960. Much beyond that and it starts to create big blobs like the example above. It does take pretty well to inpainting though, so if you create something interesting at 640 x 960, throw it in inpaint and start drawing in fun details (you may have to reeeeally de-emphasize the style in your inpaints to get it to give you what you want, just a heads up).
Finally, one thing I've found that really helps give it the "Waldo look" is using Aesthetics. I like to run an Aesthetics pass at a strength of .20 for 30 steps. It helps prevent the really washed out colors and adds the stripes that are so prevalent in Wally/Waldo comics. I've included the Waldo2.pt file if you want to download it and use it; it was trained on the same high quality images I used for the checkpoint dreambooth training.
Here is a screenshot of my config I use for generating these images:

I hope you have fun with this! Sadly it won't actually generate a Waldo/Wally in the image (at least not one that you can generate on demand), but if you're going to all the trouble to inpaint a proper Waldo/Wally scene, you can do some quick post work to add Waldo/Wally in there somewhere! =)
Here are sample images using this model:











|
jayantapaul888/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes
|
jayantapaul888
| 2022-11-02T20:16:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T14:50:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-distilbert-base-uncased-sentiment-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-distilbert-base-uncased-sentiment-finetuned-memes
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2474
- Accuracy: 0.9282
- Precision: 0.9290
- Recall: 0.9282
- F1: 0.9282
## Model description
More information needed
## Intended uses & limitations
More information needed
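No usage snippet is provided; purely as an illustrative sketch (the label names depend on the model's config and are not documented in this card), inference could look like:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jayantapaul888/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes",
)
print(classifier("this meme made my day"))
```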
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3623 | 1.0 | 1762 | 0.3171 | 0.8986 | 0.8995 | 0.8986 | 0.8981 |
| 0.271 | 2.0 | 3524 | 0.2665 | 0.9176 | 0.9182 | 0.9176 | 0.9173 |
| 0.2386 | 3.0 | 5286 | 0.2499 | 0.9237 | 0.9254 | 0.9237 | 0.9239 |
| 0.2136 | 4.0 | 7048 | 0.2494 | 0.9259 | 0.9263 | 0.9259 | 0.9257 |
| 0.1974 | 5.0 | 8810 | 0.2454 | 0.9278 | 0.9288 | 0.9278 | 0.9278 |
| 0.182 | 6.0 | 10572 | 0.2474 | 0.9282 | 0.9290 | 0.9282 | 0.9282 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
flamesbob/Sasu-Model
|
flamesbob
| 2022-11-02T19:07:13Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-10-30T20:19:43Z |
---
license: openrail
---
The token class word for this model is `sasu`. Using this will draw attention to the training data that was used and help increase the quality of the image.
License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here.
|
flamesbob/rimu_model
|
flamesbob
| 2022-11-02T19:06:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-31T01:13:03Z |
---
license: creativeml-openrail-m
---
The token class word for this model is `rimu`. Using this will draw attention to the training data that was used and help increase the quality of the image.
License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here.
|
huggingtweets/chaddraven-nickichlol-saware7
|
huggingtweets
| 2022-11-02T18:51:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-02T18:44:25Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chaddraven-nickichlol-saware7/1667415027467/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1542731743328862210/g9ZgqOmK_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1587675160072491008/Vykq9cOY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1550159744396042241/RT8UyMgT_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">nick & Chad & SW7</div>
<div style="text-align: center; font-size: 14px;">@chaddraven-nickichlol-saware7</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from nick & Chad & SW7.
| Data | nick | Chad | SW7 |
| --- | --- | --- | --- |
| Tweets downloaded | 3231 | 3174 | 3037 |
| Retweets | 215 | 504 | 161 |
| Short tweets | 663 | 1094 | 660 |
| Tweets kept | 2353 | 1576 | 2216 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22ya4o85/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chaddraven-nickichlol-saware7's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3m24xig1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3m24xig1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chaddraven-nickichlol-saware7')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
L-oenai/layoutxlm-finetuned-xfund-pt
|
L-oenai
| 2022-11-02T18:16:11Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfun",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-02T17:09:50Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-pt
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu111
- Datasets 2.6.1
- Tokenizers 0.13.1
|
allenai/scirepeval_adapters_prx
|
allenai
| 2022-11-02T17:29:49Z | 9 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:scirepeval/proximity",
"bert",
"dataset:allenai/scirepeval",
"region:us"
] | null | 2022-10-28T00:08:19Z |
---
tags:
- adapterhub:scirepeval/proximity
- adapter-transformers
- bert
datasets:
- allenai/scirepeval
---
# Adapter `allenai/scirepeval_adapters_prx` for malteos/scincl
An [adapter](https://adapterhub.ml) for the `malteos/scincl` model that was trained on the [scirepeval/proximity](https://adapterhub.ml/explore/scirepeval/proximity/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("malteos/scincl")
adapter_name = model.load_adapter("allenai/scirepeval_adapters_prx", source="hf", set_active=True)
```
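With the adapter active, a rough inference sketch might look like the following (the title-`[SEP]`-abstract input format and the use of the `[CLS]` token embedding as the document representation are assumptions, following common practice for SPECTER/SciNCL-style encoders):
```python
import torch
from transformers import AutoAdapterModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("malteos/scincl")
model = AutoAdapterModel.from_pretrained("malteos/scincl")
model.load_adapter("allenai/scirepeval_adapters_prx", source="hf", set_active=True)

# Concatenate title and abstract with the separator token (an assumption).
paper = ("BERT: Pre-training of Deep Bidirectional Transformers" + tokenizer.sep_token +
         "We introduce a new language representation model called BERT ...")
inputs = tokenizer(paper, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = model(**inputs)

# Take the [CLS] token embedding as the paper representation (an assumption).
embedding = outputs[0][:, 0, :]
print(embedding.shape)
```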
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
AndreIchiro/swinv2-finetuned-eurosat
|
AndreIchiro
| 2022-11-02T17:27:53Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-01T00:41:29Z |
---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swinv2-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|