| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
LarryAIDraw/InoriV1
|
LarryAIDraw
| 2023-07-03T20:26:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T20:19:50Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/12139?modelVersionId=14324
|
espnet/brianyan918_mustc-v2_en-de_st_ctc_conformer_asrinit_v2_raw_en_de_bpe_tc4000_sp
|
espnet
| 2023-07-03T20:12:36Z | 2 | 0 | null |
[
"region:us"
] | null | 2023-07-03T20:09:40Z |
- Download model and run inference:
`./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_mustc-v2_en-de_st_ctc_conformer_asrinit_v2_raw_en_de_bpe_tc4000_sp --inference_config conf/tuning/decode_st_conformer_ctc0.3.yaml`
|dataset|score|verbose_score|
|---|---|---|
|decode_st_conformer_ctc0.3_st_model_valid.acc.ave_10best/tst-COMMON.en-de|28.6|61.8/35.1/22.2/14.5 (BP = 0.988 ratio = 0.988 hyp_len = 51068 ref_len = 51699)|
|
computroidai/COMPUTROID
|
computroidai
| 2023-07-03T20:12:36Z | 0 | 0 | null |
[
"en",
"hi",
"dataset:Open-Orca/OpenOrca",
"license:mit",
"region:us"
] | null | 2023-07-03T20:10:55Z |
---
license: mit
datasets:
- Open-Orca/OpenOrca
language:
- en
- hi
---
|
espnet/brianyan918_mustc-v2_en-de_st_md_conformer_asrinit_v3-2_raw_en_de_bpe_tc4000_sp
|
espnet
| 2023-07-03T20:08:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-03T20:04:22Z |
- Download model and run inference:
`./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_mustc-v2_en-de_st_md_conformer_asrinit_v3-2_raw_en_de_bpe_tc4000_sp --inference_config conf/tuning/decode_st_md.yaml`
|dataset|score|verbose_score|
|---|---|---|
|decode_st_md_st_model_valid.acc.ave_10best/tst-COMMON.en-de|27.6|61.6/34.6/21.9/14.4 (BP = 0.964 ratio = 0.965 hyp_len = 49877 ref_len = 51699)|
|
andres-gv/cmi-topics-2
|
andres-gv
| 2023-07-03T20:08:21Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-07-03T19:54:05Z |
---
pipeline_tag: text-classification
library_name: bertopic
---
|
alphaduriendur/ner-deBERTa-v3-large-conll2003
|
alphaduriendur
| 2023-07-03T20:07:39Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-03T06:16:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-deBERTa-v3-large-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9235068110373734
- name: Recall
type: recall
value: 0.9362606232294618
- name: F1
type: f1
value: 0.9298399859328293
- name: Accuracy
type: accuracy
value: 0.9853128028426833
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-deBERTa-v3-large-conll2003
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Precision: 0.9235
- Recall: 0.9363
- F1: 0.9298
- Accuracy: 0.9853
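The card does not include a usage snippet; below is a minimal inference sketch with the `transformers` pipeline (the example sentence and the `aggregation_strategy` choice are illustrative, not from the original card):
```python
from transformers import pipeline

# Token-classification (NER) pipeline; "simple" aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="alphaduriendur/ner-deBERTa-v3-large-conll2003",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
```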
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0077 | 1.0 | 878 | 0.1280 | 0.9096 | 0.9265 | 0.9180 | 0.9832 |
| 0.0084 | 2.0 | 1756 | 0.1380 | 0.9167 | 0.9299 | 0.9233 | 0.9844 |
| 0.0037 | 3.0 | 2634 | 0.1495 | 0.9221 | 0.9347 | 0.9283 | 0.9850 |
| 0.0015 | 4.0 | 3512 | 0.1517 | 0.9215 | 0.9347 | 0.9280 | 0.9849 |
| 0.0006 | 5.0 | 4390 | 0.1546 | 0.9235 | 0.9363 | 0.9298 | 0.9853 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Nidhiiii/my_awesome_model
|
Nidhiiii
| 2023-07-03T19:54:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T19:13:59Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Nidhiiii/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Nidhiiii/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2520
- Validation Loss: 0.1938
- Train Accuracy: 0.9234
- Epoch: 0
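A minimal TensorFlow inference sketch (not part of the original card; the example sentence is arbitrary and labels come from the checkpoint's config):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "Nidhiiii/my_awesome_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I really enjoyed this movie!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```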
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2520 | 0.1938 | 0.9234 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
andersonbcdefg/flan_t5_80m-finetune-samsum-adapter
|
andersonbcdefg
| 2023-07-03T19:51:34Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T19:51:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
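The card lists only the PEFT version; here is a minimal loading sketch that reads the base model from the adapter config instead of assuming it:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "andersonbcdefg/flan_t5_80m-finetune-samsum-adapter"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base seq2seq model recorded in the adapter config, then attach the adapter.
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```
The adapter name suggests a SAMSum summarization fine-tune, so inference would go through `model.generate` as with any seq2seq model.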
|
mrizalf7/xlm-r-qa-small-squad
|
mrizalf7
| 2023-07-03T19:50:09Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-03T18:15:49Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-small-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-small-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9800
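A minimal sketch for trying the checkpoint with the question-answering pipeline (the example question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/xlm-r-qa-small-squad")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```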
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2394 | 1.0 | 5437 | 1.9701 |
| 0.9683 | 2.0 | 10874 | 1.9800 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nikitakapitan/distilbert-base-uncased-finetuned-emotion
|
nikitakapitan
| 2023-07-03T19:49:07Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T19:42:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235743183364048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2113
- Accuracy: 0.9235
- F1: 0.9236
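A minimal inference sketch (not part of the original card); `top_k=None` returns scores for every emotion label:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nikitakapitan/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all labels instead of only the best one
)

print(classifier("I can't wait to see the results of this experiment!"))
```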
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8004 | 1.0 | 250 | 0.2959 | 0.9135 | 0.9124 |
| 0.2377 | 2.0 | 500 | 0.2113 | 0.9235 | 0.9236 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PrakhAI/HelloWorld
|
PrakhAI
| 2023-07-03T19:22:35Z | 0 | 0 | null |
[
"dataset:mnist",
"license:gpl-3.0",
"region:us"
] | null | 2023-07-02T01:34:55Z |
---
license: gpl-3.0
datasets:
- mnist
---
Flax handwritten digit (MNIST) classification model trained using https://colab.research.google.com/github/google/flax/blob/main/docs/getting_started.ipynb
|
practical-dreamer/rpgpt-13b-lora
|
practical-dreamer
| 2023-07-03T19:08:32Z | 0 | 2 | null |
[
"dataset:practicaldreamer/RPGPT_PublicDomain-ShareGPT",
"region:us"
] | null | 2023-07-03T17:17:03Z |
---
datasets:
- practicaldreamer/RPGPT_PublicDomain-ShareGPT
---
## Introduction
This is my first attempt at training a model for long-form character interaction using the asterisk roleplay format.
There are plenty of general instruction/answer models, but most focus on single responses between an AI and a human.
My goal for this project is to more closely align the training data with CHARACTER interactions for roleplay.
This model is trained on a small synthetic dataset of characters interacting through a variety of scenarios.
The Characters, Scenarios and interactions were all generated by GPT-4.
Intended for research, creative writing, entertainment, DnD campaigns? fun!
## Train Summary
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
```
duration: ~1.5hrs
gpu: 1xA100 80GB
epochs: 1.0
speed: 3e-5
sequence_len: 2048
gradient_accumulation_steps: 32
wandb: https://wandb.ai/practicaldreamer/rpgpt/runs/b3sznjpz
```
*Please see the documentation folder for more information*
## Usage
This LoRA was trained for use with **Neko-Institute-of-Science/LLaMA-13B-HF**.
Please follow the prompt format outlined below. *Hint: If you're not sure what to put for your character description (or you're lazy), just ask ChatGPT to generate it for you! Example:*
```
Generate a short character description for Dr. Watson (The Adventures of Sherlock Holmes) that includes gender, age, MBTI and speech accent using 30 words or less.
```
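Loading the LoRA itself is not spelled out in the card; here is a rough sketch with `peft` (the dtype and device settings are assumptions, adjust to your hardware):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_id = "Neko-Institute-of-Science/LLaMA-13B-HF"
tokenizer = LlamaTokenizer.from_pretrained(base_id)
base = LlamaForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA weights on top of the base model.
model = PeftModel.from_pretrained(base, "practical-dreamer/rpgpt-13b-lora")
```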
## Prompt Format
Context/Memory:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters)
## Characters
<User-Character Name> (<User-Character Universe>):
<User-Character Description>
<Bot-Character Name> (Bot-Character Universe):
<Bot-Character Description>
## Scenario
<Scenario Description>
ASSISTANT:
```
Turn Template:
```
<User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\*
<Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\*
<User-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\*
<Bot-Character Name>: \*<1st person action/sensations/thoughts>\* <Spoken Word> \*<1st person action/sensations/thoughts>\*
...
```
## Example
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Write a character roleplay dialogue using asterisk roleplay format based on the following character descriptions and scenario. (Each line in your response must be from the perspective of one of these characters)
## Characters
Baloo (The Jungle Book):
Male, middle-aged bear, ENFP, primarily American accent with slight Indian inflections. Wise, carefree, and friendly, he teaches Mowgli the ways of the jungle.
The Queen of Hearts (Alice's Adventures in Wonderland):
Female, middle-aged, ENTJ, British English accent; imperious, temperamental, commanding, but quick to change from anger to warmth - the quintessential capricious monarch.
## Scenario
In an enchanted glade deep within Wonderland, Baloo finds himself separated from Mowgli and in the Queen of Hearts' territory. Having heard tales of this mercurial ruler, Baloo is intrigued but cautious. To his surprise, the Queen of Hearts approaches him with a broken heart - she has been betrayed by one of her closest confidants.
The Queen initially sees Baloo as an opportunity to vent her frustrations and seek validation for her hurt feelings. Baloo, however, quickly recognizes that the Queen's explosive temperament is driving others away and contributing to her own isolation.

Using his wisdom and patience, Baloo tries to teach the Queen the value of empathy and understanding in forming meaningful relationships. Through their dialogue, they explore their contrasting attitudes toward power, authority, and friendship. The Queen learns that it is not through fear and command that she can find true loyalty, but rather by treating others with kindness.
In this somber environment where both characters are out of their comfort zones, can Baloo help the Queen mend her own heart by opening up about his experiences with Mowgli? Will the Queen allow herself to be vulnerable enough to learn from the wise bear? And can these two unlikely friends find common ground in this enchanted glade within Wonderland?
ASSISTANT:
Baloo: *stumbles into the enchanted glade, looking around curiously* Well, I must've taken a wrong turn back there. Mowgli! *calls out for Mowgli*
The Queen of Hearts: *storms into the glade, clutching a broken heart symbol, face contorted in anger* Off with their heads! Everyone's heads!
Baloo: *gently scratches his head and approaches the Queen cautiously* Uh, excuse me, ma'am? I couldn't help but notice that you seem rather... upset.
The Queen of Hearts: *sniffs haughtily and scowls at Baloo* And who are you to pry into my affairs? You should be bowing to your queen!
Baloo: *bows graciously but maintains eye contact* Begging your pardon, Your Majesty. My name is Baloo, and I've heard tales of your... unique temperament.
The Queen of Hearts:
```
## Thanks
openaccess-ai-collective https://huggingface.co/openaccess-ai-collective
---
license: mit
---
|
Enymy/t5-base-feedback-generator-saf
|
Enymy
| 2023-07-03T19:07:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T19:02:02Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-feedback-generator-saf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-feedback-generator-saf
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
headflame02/Achaz
|
headflame02
| 2023-07-03T18:56:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T18:53:43Z |
---
license: creativeml-openrail-m
---
|
Sandrro/text_to_subfunction_v3
|
Sandrro
| 2023-07-03T18:52:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T17:24:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_subfunction_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_subfunction_v3
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2521
- F1: 0.2335
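A minimal inference sketch (not in the original card; the input string is a placeholder and labels come from the checkpoint's config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Sandrro/text_to_subfunction_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("текст обращения", return_tensors="pt")  # placeholder Russian input
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(model.config.id2label[int(probs.argmax())])
```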
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.538 | 1.0 | 3330 | 4.4469 | 0.0626 |
| 3.7842 | 2.0 | 6660 | 3.8135 | 0.1243 |
| 3.3021 | 3.0 | 9990 | 3.4758 | 0.1942 |
| 3.0384 | 4.0 | 13320 | 3.3084 | 0.2238 |
| 2.843 | 5.0 | 16650 | 3.2521 | 0.2335 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
nolanaatama/vstzthllvd1000pchsrvcmgzb
|
nolanaatama
| 2023-07-03T18:52:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T18:45:44Z |
---
license: creativeml-openrail-m
---
|
geekyrakshit/DeepLabV3-Plus
|
geekyrakshit
| 2023-07-03T18:51:23Z | 60 | 0 |
keras
|
[
"keras",
"segmentation",
"tensorflow",
"cityscapes",
"arxiv:1802.02611",
"region:us"
] | null | 2023-07-03T17:32:36Z |
---
metrics:
- accuracy
- mean_iou
tags:
- segmentation
- keras
- tensorflow
- cityscapes
---
# DeepLabV3-Plus
Keras implementation of the DeepLabV3+ model as proposed in the paper [Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611) (ECCV 2018).
The models were trained on the fine-annotations set of the [Cityscapes dataset](https://www.cityscapes-dataset.com) for creating presets for [this PR](https://github.com/keras-team/keras-cv/pull/1831) on the `keras-cv` repository.
**Weights & Biases Dashboard:** https://wandb.ai/geekyrakshit/deeplabv3-keras-cv
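The card does not show loading code; a short sketch using the Keras integration in `huggingface_hub`, assuming the repository's weights are stored in a format that mixin can load:
```python
from huggingface_hub import from_pretrained_keras

# Assumption: the repository contains a Keras model saved with the Hub's Keras mixin.
model = from_pretrained_keras("geekyrakshit/DeepLabV3-Plus")
model.summary()  # inspect the expected input shape before running inference
```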
|
zh-plus/faster-whisper-large-v2-japanese-5k-steps
|
zh-plus
| 2023-07-03T18:42:31Z | 289 | 16 |
transformers
|
[
"transformers",
"pytorch",
"faster-whisper",
"whisper",
"CTranslate2",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-03T08:29:37Z |
---
license: mit
datasets:
- mozilla-foundation/common_voice_11_0
language:
- ja
pipeline_tag: automatic-speech-recognition
tags:
- pytorch
- faster-whisper
- whisper
- CTranslate2
metrics:
- wer
---
Converted from [clu-ling/whisper-large-v2-japanese-5k-steps](https://huggingface.co/clu-ling/whisper-large-v2-japanese-5k-steps) using [CTranslate2](https://github.com/OpenNMT/CTranslate2).
Usage:
1. Install faster-whisper: `pip install faster-whisper` (check [faster-whisper](https://github.com/guillaumekln/faster-whisper) for detailed instructions.)
2. ```python
from faster_whisper import WhisperModel
model = WhisperModel('zh-plus/faster-whisper-large-v2-japanese-5k-steps', device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)
print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
|
anujsahani01/finetuned_mbart
|
anujsahani01
| 2023-07-03T18:40:55Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-17T14:19:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: finetuned_Mbart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_Mbart
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shularp/TestHelsinkimulEnJpTh02
|
Shularp
| 2023-07-03T18:39:09Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T11:53:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: TestHelsinkimulEnJpTh02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestHelsinkimulEnJpTh02
This model is a fine-tuned version of [Shularp/TestHelsinkimulEnJpTh02](https://huggingface.co/Shularp/TestHelsinkimulEnJpTh02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4364 | 1.0 | 4846 | 0.2666 |
| 0.1094 | 2.0 | 9692 | 0.2277 |
| 0.0484 | 3.0 | 14538 | 0.1940 |
| 0.0111 | 4.0 | 19384 | 0.1749 |
| 0.0105 | 5.0 | 24230 | 0.1630 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
falkne/QforJustification
|
falkne
| 2023-07-03T18:20:46Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:44Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/QforJustification` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/QforJustification", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/narration
|
falkne
| 2023-07-03T18:20:40Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:38Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/narration` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/narration", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/argumentative
|
falkne
| 2023-07-03T18:20:37Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:36Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/argumentative` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/argumentative", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/story
|
falkne
| 2023-07-03T18:20:36Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:34Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/story` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/story", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/reasonableness
|
falkne
| 2023-07-03T18:20:30Z | 3 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:28Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/reasonableness` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/reasonableness", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/reflexivity
|
falkne
| 2023-07-03T18:20:28Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:26Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/reflexivity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/reflexivity", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/negEmotion
|
falkne
| 2023-07-03T18:20:24Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:23Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/negEmotion` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/negEmotion", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/ibm_rank
|
falkne
| 2023-07-03T18:20:22Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:21Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/ibm_rank` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/ibm_rank", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/posEmotion
|
falkne
| 2023-07-03T18:20:20Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:19Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/posEmotion` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/posEmotion", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/interactivity
|
falkne
| 2023-07-03T18:20:18Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:17Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/interactivity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/interactivity", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/overall
|
falkne
| 2023-07-03T18:20:16Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:15Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/overall` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/overall", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/empathie
|
falkne
| 2023-07-03T18:20:14Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:20:13Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/empathie` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/empathie", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/proposal
|
falkne
| 2023-07-03T18:20:12Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:17:56Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/proposal` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/proposal", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
falkne/effectiveness
|
falkne
| 2023-07-03T18:20:09Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:argument/quality",
"roberta",
"region:us"
] | null | 2023-07-03T18:17:55Z |
---
tags:
- adapterhub:argument/quality
- roberta
- adapter-transformers
---
# Adapter `falkne/effectiveness` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [argument/quality](https://adapterhub.ml/explore/argument/quality/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("falkne/effectiveness", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
BBAI/qlora-koalpaca-polyglot-12.8b-50step
|
BBAI
| 2023-07-03T18:06:07Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T06:33:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
osiria/bert-tweet-base-italian-uncased
|
osiria
| 2023-07-03T17:57:30Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"it",
"arxiv:1810.04805",
"arxiv:2209.07562",
"arxiv:2010.05609",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-29T17:25:55Z |
---
license: apache-2.0
language:
- it
widget:
- text: "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"
example_title: "Example 1"
- text: "il governo [MASK] dovrebbe fare politica, non soltanto propaganda! #vergogna"
example_title: "Example 2"
- text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del [MASK] italiano #oscar #awards"
example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: BERT-TWEET</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, obtained using <b>TwHIN-BERT</b> <b>[2]</b> ([twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base)) as a starting point and focusing it on the Italian language by modifying the embedding layer
(as in <b>[3]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).
The resulting model has 110M parameters, a vocabulary of 30,520 tokens, and a size of ~440 MB.
<h3>Quick usage</h3>
```python
from transformers import BertTokenizerFast, BertModel
tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-tweet-base-italian-uncased")
```
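For an end-to-end check, the fill-mask pipeline also works, e.g. with one of the widget examples above:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="osiria/bert-tweet-base-italian-uncased")
print(fill("una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"))
```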
Here you can find the model already fine-tuned for Sentiment Analysis: https://huggingface.co/osiria/bert-tweet-italian-uncased-sentiment
<h3>References</h3>
[1] https://arxiv.org/abs/1810.04805
[2] https://arxiv.org/abs/2209.07562
[3] https://arxiv.org/abs/2010.05609
<h3>Limitations</h3>
This model was trained on tweets, so it's mainly suitable for general-purpose social media text processing, involving short texts written in a social network style.
It might show limitations when it comes to longer and more structured text, or domain-specific text.
<h3>License</h3>
The model is released under the <b>Apache-2.0</b> license.
|
hopkins/eng-kor-simcse.dev2.44k
|
hopkins
| 2023-07-03T17:51:10Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T17:38:07Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-simcse.dev2.44k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-simcse.dev2.44k
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9818
- Bleu: 7.4953
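The card gives no usage snippet; below is a minimal English→Korean sketch, assuming the checkpoint keeps the standard mBART-50 language codes (`en_XX`, `ko_KR`):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "hopkins/eng-kor-simcse.dev2.44k"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```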
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alesthehuman/a2c-AntBulletEnv-v0
|
alesthehuman
| 2023-07-03T17:49:19Z | 1 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T13:22:59Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2548.33 +/- 83.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed).
checkpoint = load_from_hub("alesthehuman/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
wcKd/ppo-Huggy
|
wcKd
| 2023-07-03T17:45:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-03T17:44:59Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wcKd/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
andersonbcdefg/pythia_samsum_adapter
|
andersonbcdefg
| 2023-07-03T17:43:03Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T17:43:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
alldaypa/autotrain-nyc_airbnb-71855138766
|
alldaypa
| 2023-07-03T17:41:54Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:alldaypa/autotrain-data-nyc_airbnb",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-07-03T17:38:04Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- alldaypa/autotrain-data-nyc_airbnb
co2_eq_emissions:
emissions: 0.56063822288617
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 71855138766
- CO2 Emissions (in grams): 0.5606
## Validation Metrics
- Loss: 3.502
- Rouge1: 16.234
- Rouge2: 2.784
- RougeL: 14.048
- RougeLsum: 15.348
- Gen Len: 19.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/alldaypa/autotrain-nyc_airbnb-71855138766
```
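The same request from Python (equivalent to the cURL call above):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/alldaypa/autotrain-nyc_airbnb-71855138766"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```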
|
WALIDALI/osamliby
|
WALIDALI
| 2023-07-03T17:38:42Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T17:35:18Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### osamliby Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
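Loading code is not included in the card; a minimal `diffusers` sketch (the prompt token "osamliby" is assumed to be the Dreambooth instance token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("WALIDALI/osamliby", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "osamliby" is assumed to be the instance token learned during Dreambooth training.
image = pipe("a photo of osamliby person, portrait, studio lighting").images[0]
image.save("osamliby.png")
```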
Sample pictures of this concept:
|
cdreetz/codeparrot-ds2
|
cdreetz
| 2023-07-03T17:31:45Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-15T19:08:28Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds2
A GPT-2-style model trained on a filtered subset of The Stack, specific to data-science-related code (pandas, NumPy, Matplotlib, etc.).
It achieves the following result on the evaluation set:
- Loss: 1.0584
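A minimal generation sketch for trying the model (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="cdreetz/codeparrot-ds2")

prompt = "import pandas as pd\ndf = pd.read_csv('data.csv')\n# plot the distribution of the 'age' column\n"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```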
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2038 | 0.01 | 500 | 2.1062 |
| 2.0551 | 0.02 | 1000 | 2.0109 |
| 1.9622 | 0.02 | 1500 | 1.9219 |
| 1.9512 | 0.03 | 2000 | 1.8461 |
| 1.8817 | 0.04 | 2500 | 1.7903 |
| 1.8341 | 0.05 | 3000 | 1.7401 |
| 1.7877 | 0.05 | 3500 | 1.7022 |
| 1.7586 | 0.06 | 4000 | 1.6694 |
| 1.7271 | 0.07 | 4500 | 1.6457 |
| 1.7034 | 0.08 | 5000 | 1.6193 |
| 1.6756 | 0.08 | 5500 | 1.5978 |
| 1.6576 | 0.09 | 6000 | 1.5772 |
| 1.6377 | 0.1 | 6500 | 1.5611 |
| 1.6211 | 0.11 | 7000 | 1.5453 |
| 1.6033 | 0.11 | 7500 | 1.5317 |
| 1.591 | 0.12 | 8000 | 1.5193 |
| 1.5765 | 0.13 | 8500 | 1.5053 |
| 1.5661 | 0.14 | 9000 | 1.4966 |
| 1.5548 | 0.15 | 9500 | 1.4846 |
| 1.5429 | 0.15 | 10000 | 1.4729 |
| 1.5347 | 0.16 | 10500 | 1.4641 |
| 1.5215 | 0.17 | 11000 | 1.4557 |
| 1.5151 | 0.18 | 11500 | 1.4454 |
| 1.5059 | 0.18 | 12000 | 1.4381 |
| 1.499 | 0.19 | 12500 | 1.4288 |
| 1.4906 | 0.2 | 13000 | 1.4210 |
| 1.4849 | 0.21 | 13500 | 1.4143 |
| 1.4765 | 0.21 | 14000 | 1.4085 |
| 1.4708 | 0.22 | 14500 | 1.4026 |
| 1.4602 | 0.23 | 15000 | 1.3936 |
| 1.4533 | 0.24 | 15500 | 1.3896 |
| 1.4523 | 0.25 | 16000 | 1.3818 |
| 1.4415 | 0.25 | 16500 | 1.3748 |
| 1.4417 | 0.26 | 17000 | 1.3701 |
| 1.4311 | 0.27 | 17500 | 1.3645 |
| 1.4282 | 0.28 | 18000 | 1.3585 |
| 1.4223 | 0.28 | 18500 | 1.3531 |
| 1.4165 | 0.29 | 19000 | 1.3473 |
| 1.4105 | 0.3 | 19500 | 1.3419 |
| 1.3993 | 0.31 | 20000 | 1.3374 |
| 1.4034 | 0.31 | 20500 | 1.3322 |
| 1.3982 | 0.32 | 21000 | 1.3278 |
| 1.3951 | 0.33 | 21500 | 1.3225 |
| 1.3806 | 0.34 | 22000 | 1.3180 |
| 1.3781 | 0.34 | 22500 | 1.3121 |
| 1.3761 | 0.35 | 23000 | 1.3082 |
| 1.3662 | 0.36 | 23500 | 1.3038 |
| 1.3631 | 0.37 | 24000 | 1.2995 |
| 1.3549 | 0.38 | 24500 | 1.2955 |
| 1.3577 | 0.38 | 25000 | 1.2912 |
| 1.3498 | 0.39 | 25500 | 1.2851 |
| 1.3428 | 0.4 | 26000 | 1.2807 |
| 1.342 | 0.41 | 26500 | 1.2768 |
| 1.3365 | 0.41 | 27000 | 1.2720 |
| 1.3313 | 0.42 | 27500 | 1.2678 |
| 1.3309 | 0.43 | 28000 | 1.2629 |
| 1.3221 | 0.44 | 28500 | 1.2594 |
| 1.3214 | 0.44 | 29000 | 1.2558 |
| 1.3099 | 0.45 | 29500 | 1.2510 |
| 1.31 | 0.46 | 30000 | 1.2449 |
| 1.31 | 0.47 | 30500 | 1.2414 |
| 1.305 | 0.48 | 31000 | 1.2390 |
| 1.2975 | 0.48 | 31500 | 1.2358 |
| 1.2882 | 0.49 | 32000 | 1.2311 |
| 1.2831 | 0.5 | 32500 | 1.2251 |
| 1.2836 | 0.51 | 33000 | 1.2212 |
| 1.2817 | 0.51 | 33500 | 1.2178 |
| 1.2772 | 0.52 | 34000 | 1.2130 |
| 1.2651 | 0.53 | 34500 | 1.2080 |
| 1.2683 | 0.54 | 35000 | 1.2048 |
| 1.2581 | 0.54 | 35500 | 1.1999 |
| 1.263 | 0.55 | 36000 | 1.1972 |
| 1.255 | 0.56 | 36500 | 1.1924 |
| 1.2466 | 0.57 | 37000 | 1.1884 |
| 1.2448 | 0.57 | 37500 | 1.1860 |
| 1.2413 | 0.58 | 38000 | 1.1804 |
| 1.2362 | 0.59 | 38500 | 1.1782 |
| 1.2309 | 0.6 | 39000 | 1.1732 |
| 1.2289 | 0.61 | 39500 | 1.1687 |
| 1.2208 | 0.61 | 40000 | 1.1649 |
| 1.2225 | 0.62 | 40500 | 1.1605 |
| 1.2178 | 0.63 | 41000 | 1.1555 |
| 1.208 | 0.64 | 41500 | 1.1533 |
| 1.2069 | 0.64 | 42000 | 1.1490 |
| 1.206 | 0.65 | 42500 | 1.1453 |
| 1.2013 | 0.66 | 43000 | 1.1414 |
| 1.2003 | 0.67 | 43500 | 1.1374 |
| 1.1867 | 0.67 | 44000 | 1.1337 |
| 1.187 | 0.68 | 44500 | 1.1302 |
| 1.188 | 0.69 | 45000 | 1.1270 |
| 1.179 | 0.7 | 45500 | 1.1237 |
| 1.1866 | 0.71 | 46000 | 1.1204 |
| 1.173 | 0.71 | 46500 | 1.1173 |
| 1.1706 | 0.72 | 47000 | 1.1134 |
| 1.1645 | 0.73 | 47500 | 1.1099 |
| 1.1641 | 0.74 | 48000 | 1.1063 |
| 1.1623 | 0.74 | 48500 | 1.1032 |
| 1.1561 | 0.75 | 49000 | 1.1006 |
| 1.1531 | 0.76 | 49500 | 1.0977 |
| 1.1569 | 0.77 | 50000 | 1.0950 |
| 1.1505 | 0.77 | 50500 | 1.0927 |
| 1.1473 | 0.78 | 51000 | 1.0902 |
| 1.1428 | 0.79 | 51500 | 1.0870 |
| 1.1412 | 0.8 | 52000 | 1.0844 |
| 1.1452 | 0.8 | 52500 | 1.0823 |
| 1.1391 | 0.81 | 53000 | 1.0805 |
| 1.1329 | 0.82 | 53500 | 1.0783 |
| 1.1295 | 0.83 | 54000 | 1.0764 |
| 1.125 | 0.84 | 54500 | 1.0746 |
| 1.1295 | 0.84 | 55000 | 1.0730 |
| 1.1247 | 0.85 | 55500 | 1.0711 |
| 1.1225 | 0.86 | 56000 | 1.0696 |
| 1.1235 | 0.87 | 56500 | 1.0680 |
| 1.1192 | 0.87 | 57000 | 1.0670 |
| 1.1189 | 0.88 | 57500 | 1.0654 |
| 1.1196 | 0.89 | 58000 | 1.0646 |
| 1.1152 | 0.9 | 58500 | 1.0635 |
| 1.1133 | 0.9 | 59000 | 1.0628 |
| 1.1126 | 0.91 | 59500 | 1.0619 |
| 1.1142 | 0.92 | 60000 | 1.0610 |
| 1.1112 | 0.93 | 60500 | 1.0605 |
| 1.1137 | 0.93 | 61000 | 1.0599 |
| 1.1127 | 0.94 | 61500 | 1.0595 |
| 1.1111 | 0.95 | 62000 | 1.0592 |
| 1.1121 | 0.96 | 62500 | 1.0588 |
| 1.1114 | 0.97 | 63000 | 1.0587 |
| 1.1121 | 0.97 | 63500 | 1.0585 |
| 1.1078 | 0.98 | 64000 | 1.0584 |
| 1.1104 | 0.99 | 64500 | 1.0584 |
| 1.1057 | 1.0 | 65000 | 1.0584 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sourabh2/spaceinvandernoframeship-v2
|
Sourabh2
| 2023-07-03T17:28:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T17:26:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 229.50 +/- 112.19
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sourabh2 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sourabh2 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sourabh2
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
DanialAmin/InsuranceLLM
|
DanialAmin
| 2023-07-03T17:20:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-03T17:15:38Z |
---
license: tii-falcon-llm
---
|
felipec23/open-llama-3b
|
felipec23
| 2023-07-03T16:45:32Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T16:45:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
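For reference, these values map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch reconstructed from the list above, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the 8-bit quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```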
### Framework versions
- PEFT 0.4.0.dev0
|
Wongstein/vide-noir
|
Wongstein
| 2023-07-03T16:39:18Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-generation-inference",
"en",
"dataset:amazon_us_reviews",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T16:13:16Z |
---
license: creativeml-openrail-m
datasets:
- amazon_us_reviews
language:
- en
tags:
- text-generation-inference
---
|
matsia/huggy
|
matsia
| 2023-07-03T16:36:56Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-03T16:36:53Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: matsia/huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HeshamMamdouh/AraBart-sum-fine-tuned
|
HeshamMamdouh
| 2023-07-03T16:14:26Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T16:14:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AraBart-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AraBart-sum-fine-tuned
This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
khalidbutt/k
|
khalidbutt
| 2023-07-03T16:09:24Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-03T16:09:24Z |
---
license: bigscience-bloom-rail-1.0
---
|
FabriLluvia/BOT
|
FabriLluvia
| 2023-07-03T16:03:08Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"fill-mask",
"es",
"en",
"dataset:OpenAssistant/oasst1",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
fill-mask
| 2023-07-03T16:01:17Z |
---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- fka/awesome-chatgpt-prompts
language:
- es
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: fill-mask
tags:
- code
---
|
TootToot/ppo-Huggy
|
TootToot
| 2023-07-03T15:45:15Z | 32 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-11T16:20:32Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: TootToot/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jwelch1624/rare-puppers
|
jwelch1624
| 2023-07-03T15:23:14Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-03T15:23:07Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909361839294
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
hopkins/eng-mya-wsample.49
|
hopkins
| 2023-07-03T15:17:37Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T14:56:40Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-wsample.49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-wsample.49
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8303
- Bleu: 4.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mcamara/dqn-SpaceInvadersNoFrameskip-v4
|
mcamara
| 2023-07-03T15:12:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T15:12:05Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 622.00 +/- 197.35
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mcamara -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mcamara -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mcamara
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
haris001/alpaca_tweet_sentiment
|
haris001
| 2023-07-03T15:07:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T15:03:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
kresnik/wav2vec2-large-xlsr-korean
|
kresnik
| 2023-07-03T14:55:40Z | 1,123,517 | 38 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ko
datasets:
- kresnik/zeroth_korean
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
model-index:
- name: 'Wav2Vec2 XLSR Korean'
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Zeroth Korean
type: kresnik/zeroth_korean
args: clean
metrics:
- name: Test WER
type: wer
value: 4.74
- name: Test CER
type: cer
value: 1.78
---
## Evaluation on Zeroth-Korean ASR corpus
[Google colab notebook(Korean)](https://colab.research.google.com/github/indra622/tutorials/blob/master/wav2vec2_korean_tutorial.ipynb)
```
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import soundfile as sf
import torch
from jiwer import wer
processor = Wav2Vec2Processor.from_pretrained("kresnik/wav2vec2-large-xlsr-korean")
model = Wav2Vec2ForCTC.from_pretrained("kresnik/wav2vec2-large-xlsr-korean").to('cuda')
ds = load_dataset("kresnik/zeroth_korean", "clean")
test_ds = ds['test']
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
test_ds = test_ds.map(map_to_array)
def map_to_pred(batch):
    inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch
result = test_ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
### Expected WER: 4.74%
### Expected CER: 1.78%
|
LukeMoore11/Big-Benjamin
|
LukeMoore11
| 2023-07-03T14:44:11Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"en",
"dataset:LukeMoore11/autotrain-data-second-attempt",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T22:08:19Z |
---
tags:
- summarization
language:
- en
widget:
- text: "Enter legal document..."
datasets:
- LukeMoore11/autotrain-data-second-attempt
co2_eq_emissions:
emissions: 67.54051067286701
---
## Validation Metrics
- Loss: 1.379
- Rouge1: 24.817
- Rouge2: 20.238
- RougeL: 24.044
- RougeLsum: 24.222
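A minimal usage sketch with the `transformers` summarization pipeline (not from the original card):
```python
from transformers import pipeline

# Load the fine-tuned T5 summarizer from the Hub
summarizer = pipeline("summarization", model="LukeMoore11/Big-Benjamin")

document = "Enter legal document..."  # replace with the full text to summarize
print(summarizer(document, max_length=128, min_length=32)[0]["summary_text"])
```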
|
Phips/q-FrozenLake-v1-4x4-noSlippery
|
Phips
| 2023-07-03T14:42:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T14:42:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Phips/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jgranie/lunarlanderv2
|
jgranie
| 2023-07-03T14:38:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T14:37:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.30 +/- 16.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, not taken from the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub(repo_id="jgranie/lunarlanderv2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
WALIDALI/marimwly
|
WALIDALI
| 2023-07-03T14:21:53Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T14:09:10Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### marimwly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
iamkzntsv/ddpm-celebahq-finetuned-vintage-faces-16epochs
|
iamkzntsv
| 2023-07-03T14:00:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"region:us"
] |
unconditional-image-generation
| 2023-07-03T13:56:34Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model based on the tutorial from Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
A DDPM model trained on CelebA-HQ and fine-tuned to generate vintage-styled face images.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('iamkzntsv/ddpm-celebahq-finetuned-vintage-faces-16epochs')
image = pipeline().images[0]
image
```
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-03_test
|
jordyvl
| 2023-07-03T13:53:49Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T13:47:33Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-03_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-03_test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6984
- Accuracy: 0.13
- Exit 0 Accuracy: 0.0675
- Exit 1 Accuracy: 0.0725
- Exit 2 Accuracy: 0.1125
- Exit 3 Accuracy: 0.0625
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.99 | 11 | 2.7287 | 0.13 | 0.07 | 0.0725 | 0.115 | 0.0625 | 0.0625 |
| No log | 1.99 | 22 | 2.6984 | 0.13 | 0.0675 | 0.0725 | 0.1125 | 0.0625 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
iammartian0/distilhubert-finetuned-gtzan
|
iammartian0
| 2023-07-03T13:52:49Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-03T10:17:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Accuracy: 0.84
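As a usage sketch (assuming a local 16 kHz audio clip), the model can be queried with the audio-classification pipeline:
```python
from transformers import pipeline

# Load the fine-tuned genre classifier from the Hub
classifier = pipeline("audio-classification", model="iammartian0/distilhubert-finetuned-gtzan")

# "song.wav" is a placeholder path to a local audio file
for prediction in classifier("song.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```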
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1578 | 0.99 | 56 | 2.1203 | 0.55 |
| 1.6815 | 2.0 | 113 | 1.6607 | 0.57 |
| 1.2921 | 2.99 | 169 | 1.2421 | 0.64 |
| 1.0324 | 4.0 | 226 | 1.0260 | 0.7 |
| 0.8661 | 4.99 | 282 | 0.8973 | 0.7 |
| 0.6192 | 6.0 | 339 | 0.7420 | 0.79 |
| 0.5437 | 6.99 | 395 | 0.6951 | 0.8 |
| 0.4917 | 8.0 | 452 | 0.6996 | 0.78 |
| 0.3868 | 8.99 | 508 | 0.6648 | 0.81 |
| 0.3816 | 10.0 | 565 | 0.6584 | 0.79 |
| 0.1935 | 10.99 | 621 | 0.6101 | 0.84 |
| 0.128 | 12.0 | 678 | 0.5445 | 0.85 |
| 0.1144 | 12.99 | 734 | 0.5703 | 0.84 |
| 0.0828 | 14.0 | 791 | 0.5632 | 0.83 |
| 0.0928 | 14.87 | 840 | 0.5528 | 0.84 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Avanthika/language-translation
|
Avanthika
| 2023-07-03T13:49:45Z | 24 | 2 |
transformers
|
[
"transformers",
"text2text-generation",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T06:57:46Z |
---
pipeline_tag: text2text-generation
---
# Language Translation English to Kannada
This is an English-to-Kannada translation model built as a Transformer with 3 encoder and 3 decoder layers. It translates English sentences into Kannada.
---
datasets:
- kannada.txt
- english.txt
---
|
juliensimon/autotrain-food101-1471154050
|
juliensimon
| 2023-07-03T13:43:38Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"autotrain",
"vision",
"image-classification",
"dataset:juliensimon/autotrain-data-food101",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-15T12:42:31Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- juliensimon/autotrain-data-food101
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 135.18748471833436
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1471154050
- CO2 Emissions (in grams): 135.1875
## Validation Metrics
- Loss: 0.391
- Accuracy: 0.890
- Macro F1: 0.890
- Micro F1: 0.890
- Weighted F1: 0.890
- Macro Precision: 0.892
- Micro Precision: 0.890
- Weighted Precision: 0.892
- Macro Recall: 0.890
- Micro Recall: 0.890
- Weighted Recall: 0.890
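A minimal inference sketch with the image-classification pipeline (not part of the AutoTrain output; the image URL is one of the widget examples above):
```python
from transformers import pipeline

# Load the AutoTrain image classifier from the Hub
classifier = pipeline("image-classification", model="juliensimon/autotrain-food101-1471154050")

# Any local image path or URL works here
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg"))
```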
|
juliensimon/autonlp-reuters-summarization-31447312
|
juliensimon
| 2023-07-03T13:43:01Z | 111 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:juliensimon/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- juliensimon/autonlp-data-reuters-summarization
co2_eq_emissions: 206.46626351359515
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 31447312
- CO2 Emissions (in grams): 206.46626351359515
## Validation Metrics
- Loss: 1.1907752752304077
- Rouge1: 55.9215
- Rouge2: 30.7724
- RougeL: 53.185
- RougeLsum: 53.3353
- Gen Len: 15.1236
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/juliensimon/autonlp-reuters-summarization-31447312
```
|
dcarpintero/q-FrozenLake-v1-4x4-noSlippery
|
dcarpintero
| 2023-07-03T13:41:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T13:41:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dcarpintero/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AndreNasci/ppo-Huggy
|
AndreNasci
| 2023-07-03T13:26:19Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-03T13:26:09Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AndreNasci/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Khushnur/t5-base-end2end-questions-generation_eli_squad
|
Khushnur
| 2023-07-03T13:17:24Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5_cleaned_datav3_60k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-29T18:54:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5_cleaned_datav3_60k
model-index:
- name: t5-base-end2end-questions-generation_eli_squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_eli_squad
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5_cleaned_datav3_60k dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3313
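A minimal generation sketch (the `generate questions:` prefix is an assumption about how end-to-end question-generation models of this kind are typically prompted, not confirmed by this card):
```python
from transformers import pipeline

# Load the fine-tuned T5 question generator from the Hub
qg = pipeline("text2text-generation", model="Khushnur/t5-base-end2end-questions-generation_eli_squad")

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(qg("generate questions: " + context, max_length=64)[0]["generated_text"])
```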
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7426 | 0.26 | 100 | 2.4735 |
| 2.305 | 0.52 | 200 | 2.4169 |
| 2.2034 | 0.78 | 300 | 2.3887 |
| 2.1562 | 1.04 | 400 | 2.3710 |
| 2.0883 | 1.31 | 500 | 2.3574 |
| 2.07 | 1.57 | 600 | 2.3492 |
| 2.0595 | 1.83 | 700 | 2.3433 |
| 2.0337 | 2.09 | 800 | 2.3384 |
| 2.0012 | 2.35 | 900 | 2.3353 |
| 2.0175 | 2.61 | 1000 | 2.3320 |
| 2.0035 | 2.87 | 1100 | 2.3313 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-cl-concat-rarity-mod-datasets-6
|
NasimB
| 2023-07-03T13:08:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T11:10:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cl-concat-rarity-mod-datasets-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cl-concat-rarity-mod-datasets-6
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.6082 | 0.06 | 500 | 5.8581 |
| 5.3496 | 0.11 | 1000 | 5.4574 |
| 5.0066 | 0.17 | 1500 | 5.2413 |
| 4.7806 | 0.22 | 2000 | 5.1099 |
| 4.6202 | 0.28 | 2500 | 5.0191 |
| 4.4997 | 0.33 | 3000 | 4.9599 |
| 4.3878 | 0.39 | 3500 | 4.9168 |
| 4.2858 | 0.44 | 4000 | 4.8861 |
| 4.1858 | 0.5 | 4500 | 4.8493 |
| 4.0947 | 0.55 | 5000 | 4.8152 |
| 4.0087 | 0.61 | 5500 | 4.8013 |
| 3.9228 | 0.66 | 6000 | 4.7840 |
| 3.8464 | 0.72 | 6500 | 4.7652 |
| 3.7884 | 0.78 | 7000 | 4.7589 |
| 3.7366 | 0.83 | 7500 | 4.7531 |
| 3.7018 | 0.89 | 8000 | 4.7470 |
| 3.6791 | 0.94 | 8500 | 4.7431 |
| 3.6709 | 1.0 | 9000 | 4.7433 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
veluchs/whisper-tiny-us
|
veluchs
| 2023-07-03T13:06:17Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-03T12:43:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-us
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33943329397874855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-us
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6329
- Wer Ortho: 0.3430
- Wer: 0.3394
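A minimal transcription sketch using the ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="veluchs/whisper-tiny-us")

# Any 16 kHz English audio file can be passed here
print(asr("sample.wav")["text"])
```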
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0009 | 17.86 | 500 | 0.6329 | 0.3430 | 0.3394 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dsl1/pokemon-lora
|
dsl1
| 2023-07-03T12:52:31Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-14T06:03:44Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - dsl1/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Some example images are shown below.




|
msladic/Reinforce-Pixelcopter-PLE-v0
|
msladic
| 2023-07-03T12:51:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T10:03:43Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.70 +/- 12.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hopkins/mbart-finetuned-eng-ind-longest
|
hopkins
| 2023-07-03T12:45:11Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T12:26:25Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-ind-longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-ind-longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7474
- Bleu: 21.9863
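A minimal translation sketch (the mBART-50 language codes `en_XX` and `id_ID` are assumptions based on the base model, not stated in this card):
```python
from transformers import pipeline

# Load the fine-tuned mBART-50 checkpoint from the Hub
translator = pipeline(
    "translation", model="hopkins/mbart-finetuned-eng-ind-longest", src_lang="en_XX", tgt_lang="id_ID"
)

print(translator("The weather is nice today.")[0]["translation_text"])
```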
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
devgupta/falcon-7b-tax
|
devgupta
| 2023-07-03T12:35:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-03T12:29:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
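For reference, a sketch of the same settings expressed as a `BitsAndBytesConfig` (reconstructed from the list above, not taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the 4-bit (NF4) quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```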
### Framework versions
- PEFT 0.4.0.dev0
|
renatoneto14/HuggyTraining
|
renatoneto14
| 2023-07-03T12:29:30Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-03T12:28:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: renatoneto14/HuggyTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hopkins/mbart-finetuned-eng-deu-longest
|
hopkins
| 2023-07-03T12:25:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T12:06:22Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6322
- Bleu: 20.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/mbart-finetuned-eng-deu-random
|
hopkins
| 2023-07-03T12:25:38Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-03T12:06:16Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-deu-random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-deu-random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6656
- Bleu: 20.8048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DEplain/trimmed_mbart_sents_apa_web
|
DEplain
| 2023-07-03T12:09:30Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"text simplification",
"plain language",
"easy-to-read language",
"sentence simplification",
"de",
"dataset:DEplain/DEplain-APA-sent",
"dataset:DEplain/DEplain-web-sent",
"arxiv:2305.18939",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-01T14:45:33Z |
---
datasets:
- DEplain/DEplain-APA-sent
- DEplain/DEplain-web-sent
language:
- de
metrics:
- sari
- bleu
- bertscore
library_name: transformers
pipeline_tag: text2text-generation
tags:
- text simplification
- plain language
- easy-to-read language
- sentence simplification
---
# DEplain German Text Simplification
This model belongs to the experiments done at the work of Stodden, Momen, Kallmeyer (2023). ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
Detailed documentation can be found on this GitHub repository [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain)
### Model Description
The model is a fine-tuned checkpoint of the pre-trained mBART model `mbart-large-cc25`, with the vocabulary trimmed to the 30k most frequent words in German.
The model was fine-tuned for the task of German sentence-level text simplification.
The fine-tuning dataset consists of manually aligned sentences from the datasets `DEplain-APA-sent` and `DEplain-web-sent-manual-open`.
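A minimal inference sketch (the example sentence and generation settings are illustrative, not those used in the paper):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the trimmed, fine-tuned mBART checkpoint from the Hub
tokenizer = AutoTokenizer.from_pretrained("DEplain/trimmed_mbart_sents_apa_web")
model = AutoModelForSeq2SeqLM.from_pretrained("DEplain/trimmed_mbart_sents_apa_web")

sentence = "Die Ratsmitglieder fassten den Beschluss einstimmig."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```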
|
deepsense-ai/trelbert
|
deepsense-ai
| 2023-07-03T12:01:15Z | 114 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"pl",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-15T12:03:45Z |
---
language: pl
license: cc-by-4.0
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
- text: "Sztuczna inteligencja to <mask>."
- text: "Robert Kubica jest najlepszym <mask>."
- text: "<mask> jest największym zdrajcą."
- text: "<mask> to najlepszy polski klub."
- text: "Twoja <mask>"
---
# TrelBERT
TrelBERT is a BERT-based language model trained on data from Polish Twitter using the Masked Language Modeling objective. It is based on the [HerBERT](https://aclanthology.org/2021.bsnlp-1.1) model and is therefore released under the same license, CC BY 4.0.
## Training
We trained our model starting from [`herbert-base-cased`](https://huggingface.co/allegro/herbert-base-cased) checkpoint and continued MLM training using data collected from Twitter.
The data we used for MLM fine-tuning was approximately 45 million Polish tweets. We trained the model for 1 epoch with a learning rate `5e-5` and batch size `2184` using AdamW optimizer.
### Preprocessing
For each Tweet, user handles that occur at the beginning of the text were removed, as they are not part of the message content but only indicate who the user is replying to. The remaining user handles were replaced by "@anonymized_account". Links were replaced with a special @URL token.
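A rough sketch of that preprocessing (the exact rules and regexes used by the authors are not published here, so this is an approximation):
```python
import re

def preprocess_tweet(text: str) -> str:
    # Drop reply handles at the start of the tweet
    text = re.sub(r"^(@\w+\s+)+", "", text)
    # Anonymize remaining user handles
    text = re.sub(r"@\w+", "@anonymized_account", text)
    # Replace links with a special token
    text = re.sub(r"https?://\S+", "@URL", text)
    return text.strip()

print(preprocess_tweet("@user1 @user2 Świetny mecz! Zobacz @user3 https://example.com"))
```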
## Tokenizer
We use HerBERT tokenizer with two special tokens added for preprocessing purposes as described above (@anonymized_account, @URL). Maximum sequence length is set to 128, based on the analysis of Twitter data distribution.
## License
CC BY 4.0
## KLEJ Benchmark results
We fine-tuned TrelBERT to [KLEJ benchmark](https://klejbenchmark.com) tasks and achieved the following results:
|Task name|Score|
|--|--|
|NKJP-NER|94.4|
|CDSC-E|93.9|
|CDSC-R|93.6|
|CBD|76.1|
|PolEmo2.0-IN|89.3|
|PolEmo2.0-OUT|78.1|
|DYK|67.4|
|PSC|95.7|
|AR|86.1|
|__Average__|__86.1__|
For fine-tuning on the KLEJ tasks we used the [Polish RoBERTa](https://github.com/sdadas/polish-roberta) scripts, which we modified to use the `transformers` library. For the CBD task, we set the maximum sequence length to 128 and applied the same preprocessing procedure as in the MLM phase.
Our model achieved 1st place in the cyberbullying detection (CBD) task on the [KLEJ leaderboard](https://klejbenchmark.com/leaderboard). Overall, it reached 7th place, just below the HerBERT model.
## Citation
Please cite the following paper:
```
@inproceedings{szmyd-etal-2023-trelbert,
title = "{T}rel{BERT}: A pre-trained encoder for {P}olish {T}witter",
author = "Szmyd, Wojciech and
Kotyla, Alicja and
Zobni{\'o}w, Micha{\l} and
Falkiewicz, Piotr and
Bartczuk, Jakub and
Zygad{\l}o, Artur",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.3",
pages = "17--24",
abstract = "Pre-trained Transformer-based models have become immensely popular amongst NLP practitioners. We present TrelBERT {--} the first Polish language model suited for application in the social media domain. TrelBERT is based on an existing general-domain model and adapted to the language of social media by pre-training it further on a large collection of Twitter data. We demonstrate its usefulness by evaluating it in the downstream task of cyberbullying detection, in which it achieves state-of-the-art results, outperforming larger monolingual models trained on general-domain corpora, as well as multilingual in-domain models, by a large margin. We make the model publicly available. We also release a new dataset for the problem of harmful speech detection.",
}
```
## Authors
Jakub Bartczuk, Krzysztof Dziedzic, Piotr Falkiewicz, Alicja Kotyla, Wojciech Szmyd, Michał Zobniów, Artur Zygadło
For more information, reach out to us via e-mail: artur.zygadlo@deepsense.ai
|
velascoluis/falcon7b-instruct-database-ft
|
velascoluis
| 2023-07-03T11:50:55Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-02T19:45:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: falcon7b-instruct-database-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7b-instruct-database-ft
This model is a fine-tuned version of [tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hypothetical `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
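These values map roughly onto a standard `transformers.TrainingArguments` configuration; the sketch below is a hypothetical reconstruction, not the author's actual training script, and the dataset and Trainer/PEFT setup are not documented here:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="falcon7b-instruct-database-ft",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```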
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sandrro/text_to_function_v2
|
Sandrro
| 2023-07-03T11:50:51Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T10:31:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: text_to_function_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_to_function_v2
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0580
- F1: 0.7937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.9053 | 1.0 | 2925 | 0.8585 | 0.7410 |
| 0.6403 | 2.0 | 5850 | 0.8756 | 0.7693 |
| 0.4261 | 3.0 | 8775 | 0.9378 | 0.7872 |
| 0.3379 | 4.0 | 11700 | 1.0294 | 0.7925 |
| 0.2362 | 5.0 | 14625 | 1.0580 | 0.7937 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.1.0.dev20230414+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
searde/model-financial-documents
|
searde
| 2023-07-03T11:46:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:searde/dataset-financial-documents-2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T12:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- searde/dataset-financial-documents-2
metrics:
- rouge
model-index:
- name: tst-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: searde/dataset-financial-documents-2 3.0.0
type: searde/dataset-financial-documents-2
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 90.0297
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the searde/dataset-financial-documents-2 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0730
- Rouge1: 90.0297
- Rouge2: 68.9083
- Rougel: 89.8451
- Rougelsum: 89.9838
- Gen Len: 38.9598
## Model description
More information needed
## Intended uses & limitations
More information needed
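In the absence of documented usage, here is a minimal, hypothetical inference sketch that assumes standard T5-style text-to-text summarization input; the dataset's exact input format is not described in this card:
```python
from transformers import pipeline

# Hypothetical usage; the expected input format of the financial-documents
# dataset is not documented in this card.
summarizer = pipeline("summarization", model="searde/model-financial-documents")
text = "Revenue for the quarter increased by 12% year over year, driven by ..."
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```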
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
searde/model-financial-documents-3
|
searde
| 2023-07-03T11:46:05Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:searde/dataset-financial-documents-3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-29T08:20:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- searde/dataset-financial-documents-3
metrics:
- rouge
model-index:
- name: tst-summarization
results:
- task:
name: Summarization
type: summarization
dataset:
name: searde/dataset-financial-documents-3 3.0.0
type: searde/dataset-financial-documents-3
config: 3.0.0
split: validation
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 14.9574
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the searde/dataset-financial-documents-3 3.0.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0505
- Rouge1: 14.9574
- Rouge2: 0.0
- Rougel: 8.4517
- Rougelsum: 12.4858
- Gen Len: 63.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ayushutkarsh/t3
|
ayushutkarsh
| 2023-07-03T11:35:55Z | 51 | 6 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"conversational",
"en",
"dataset:McGill-NLP/FaithDial",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-02T06:07:50Z |
---
license: apache-2.0
datasets:
- McGill-NLP/FaithDial
language:
- en
metrics:
- bleu
- bertscore
- accuracy
pipeline_tag: conversational
---
T3 stands for Terribly Tiny Transformers: an efficient way of creating tiny distilled (student) models for hallucination-free LLMs in parameter-constrained environments (edge devices).
The base model is a T3 adaptation of the T5 model. The T3 paradigm can be extended to all model types (encoder-only, decoder-only & seq2seq).
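As an illustration, here is a minimal, hypothetical generation sketch; it assumes the checkpoint loads with the standard T5 seq2seq classes, and the knowledge-grounded prompt format shown is a guess rather than the documented input format:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage; the expected dialogue/prompt format is not documented.
tokenizer = AutoTokenizer.from_pretrained("ayushutkarsh/t3")
model = AutoModelForSeq2SeqLM.from_pretrained("ayushutkarsh/t3")

history = "Knowledge: The Eiffel Tower is 330 metres tall. User: How tall is the Eiffel Tower?"
inputs = tokenizer(history, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```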
|
AMUseBot/roberta-base-cookdial-v1_1
|
AMUseBot
| 2023-07-03T11:31:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-17T09:35:53Z |
---
language:
- en
library_name: transformers
tags:
- text-classification
widget:
- text: "What ingredients do I need?"
---
- Baseline NLU model for the "AMUseBot" cooking taskbot prototype. Updated version with more robust req_ingredient intent recognition thanks to finetuning with extra synthetic data.
- ``roberta-base`` model finetuned with default hyperparameters for 7 epochs on intents from the CookDial (https://github.com/YiweiJiang2015/CookDial) dataset with an extra choose_recipe intent added. The ``simpletransformers`` library was used for fine-tuning.
- Intent mapping: {"0": "affirm", "1": "choose_recipe", "2": "confirm", "3": "goodbye", "4": "greeting", "5": "negate", "6": "other", "7": "req_amount", "8": "req_duration", "9": "req_ingredient", "10": "req_ingredient_list", "11": "req_ingredient_list_ends", "12": "req_ingredient_list_length", "13": "req_instruction", "14": "req_is_recipe_finished", "15": "req_is_recipe_ongoing", "16": "req_parallel_action", "17": "req_repeat", "18": "req_start", "19": "req_substitute", "20": "req_temperature", "21": "req_title", "22": "req_tool", "23": "req_use_all", "24": "thank"}.
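As an illustration, a minimal hedged inference sketch; it assumes the intent mapping above has to be applied manually (the checkpoint's own `id2label` config may already provide it):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AMUseBot/roberta-base-cookdial-v1_1")
model = AutoModelForSequenceClassification.from_pretrained("AMUseBot/roberta-base-cookdial-v1_1")

# Intent mapping copied from the card above.
id2intent = {0: "affirm", 1: "choose_recipe", 2: "confirm", 3: "goodbye", 4: "greeting",
             5: "negate", 6: "other", 7: "req_amount", 8: "req_duration", 9: "req_ingredient",
             10: "req_ingredient_list", 11: "req_ingredient_list_ends",
             12: "req_ingredient_list_length", 13: "req_instruction",
             14: "req_is_recipe_finished", 15: "req_is_recipe_ongoing",
             16: "req_parallel_action", 17: "req_repeat", 18: "req_start",
             19: "req_substitute", 20: "req_temperature", 21: "req_title",
             22: "req_tool", 23: "req_use_all", 24: "thank"}

logits = model(**tokenizer("What ingredients do I need?", return_tensors="pt")).logits
print(id2intent[int(torch.argmax(logits, dim=-1))])
```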
|
KPrashanth/dqn-SpaceInvadersNoFrameskip-v4
|
KPrashanth
| 2023-07-03T11:23:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T11:23:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 761.00 +/- 316.10
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KPrashanth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga KPrashanth -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga KPrashanth
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
NasimB/gpt2-dp-mod-datasets
|
NasimB
| 2023-07-03T11:20:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T07:47:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-datasets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-datasets
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.721 | 0.28 | 500 | 5.6661 |
| 5.3704 | 0.55 | 1000 | 5.2444 |
| 5.0331 | 0.83 | 1500 | 4.9898 |
| 4.784 | 1.1 | 2000 | 4.8409 |
| 4.6004 | 1.38 | 2500 | 4.7323 |
| 4.5032 | 1.65 | 3000 | 4.6355 |
| 4.4157 | 1.93 | 3500 | 4.5419 |
| 4.2123 | 2.2 | 4000 | 4.5062 |
| 4.1323 | 2.48 | 4500 | 4.4562 |
| 4.1086 | 2.75 | 5000 | 4.3991 |
| 4.0432 | 3.03 | 5500 | 4.3667 |
| 3.8085 | 3.3 | 6000 | 4.3636 |
| 3.8151 | 3.58 | 6500 | 4.3268 |
| 3.7855 | 3.85 | 7000 | 4.2969 |
| 3.6519 | 4.13 | 7500 | 4.3076 |
| 3.5149 | 4.4 | 8000 | 4.3007 |
| 3.5086 | 4.68 | 8500 | 4.2851 |
| 3.4995 | 4.95 | 9000 | 4.2743 |
| 3.3468 | 5.23 | 9500 | 4.2884 |
| 3.3143 | 5.5 | 10000 | 4.2904 |
| 3.3138 | 5.78 | 10500 | 4.2893 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
zijun/autotrain-input_list-71788138727
|
zijun
| 2023-07-03T11:19:37Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:zijun/autotrain-data-input_list",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T11:19:08Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- zijun/autotrain-data-input_list
co2_eq_emissions:
emissions: 0.20160817247860105
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 71788138727
- CO2 Emissions (in grams): 0.2016
## Validation Metrics
- Loss: 0.261
- Accuracy: 0.882
- Precision: 0.926
- Recall: 0.926
- AUC: 0.931
- F1: 0.926
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/zijun/autotrain-input_list-71788138727
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("zijun/autotrain-input_list-71788138727", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
lx865712528/master-base-pretrained-msmarco
|
lx865712528
| 2023-07-03T11:04:17Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"feature-extraction",
"en",
"dataset:ms_marco",
"arxiv:2212.07841",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-03T10:19:24Z |
---
license: mit
datasets:
- ms_marco
language:
- en
pipeline_tag: feature-extraction
---
# MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers
Paper: [https://arxiv.org/abs/2212.07841](https://arxiv.org/abs/2212.07841).
Code: [https://github.com/microsoft/SimXNS/tree/main/MASTER](https://github.com/microsoft/SimXNS/tree/main/MASTER).
## Overview
This is the checkpoint after pretraining on the MS-MARCO corpus. **You may use this checkpoint as the initialization for finetuning.**
## Usage
To load this checkpoint for initialization, you can do the following:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained('lx865712528/master-base-pretrained-msmarco')
```
|
language-and-voice-lab/sbert-ruquad
|
language-and-voice-lab
| 2023-07-03T10:54:38Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"is",
"dataset:language-and-voice-lab/ruquad1",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-03T09:49:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
datasets:
- language-and-voice-lab/ruquad1
language:
- is
---
# sbert-ruquad
sbert-ruquad is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
The model is based on the [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2), fine-tuned on [RUQuAD](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/310) - a question-answer dataset for Icelandic.
The data used for this model consists of question-span and question-paragraph pairs, of which 14,920 pairs were used for training with the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('language-and-voice-lab/sbert-ruquad')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('language-and-voice-lab/sbert-ruquad')
model = AutoModel.from_pretrained('language-and-voice-lab/sbert-ruquad')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
The model was evaluated with a hold-out set from the original data using the [BinaryClassificationEvaluator](https://www.sbert.net/docs/package_reference/evaluation.html?highlight=binaryclassificationevaluator#sentence_transformers.evaluation.BinaryClassificationEvaluator) approach.
| cossim_accuracy | cossim_f1 | cossim_precision | cossim_recall | cossim_ap | manhattan_accuracy | manhattan_f1 | manhattan_precision | manhattan_recall | manhattan_ap | euclidean_accuracy | euclidean_f1 | euclidean_precision | euclidean_recall | euclidean_ap | dot_accuracy | dot_f1 | dot_precision | dot_recall | dot_ap |
|-----------------|-------------|------------------|---------------|-------------|--------------------|--------------|---------------------|------------------|--------------|--------------------|--------------|---------------------|------------------|--------------|--------------|-------------|---------------|-------------|-------------|
| 0.913616792 | 0.910709318 | 0.942429476 | 0.881054898 | 0.968807199 | 0.869483315 | 0.856401384 | 0.922360248 | 0.799246502 | 0.932638132 | 0.869214209 | 0.857062937 | 0.892253931 | 0.824542519 | 0.932737722 | 0.914962325 | 0.911732456 | 0.929050279 | 0.895048439 | 0.968732732 |
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name="language-and-voice-lab/sbert-ruquad")
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 933 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
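For illustration, here is a hedged sketch of how these parameters map onto a `sentence-transformers` training run; the construction of question-paragraph pairs from RUQuAD is an assumption and is only shown with placeholder examples:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

# Placeholder pairs; in practice these come from RUQuAD question-paragraph data.
train_pairs = [
    ("Hvenær var Háskóli Íslands stofnaður?", "Háskóli Íslands var stofnaður árið 1911 ..."),
    ("Hver skrifaði Njálu?", "Njáls saga er íslensk fornsaga ..."),
]
train_examples = [InputExample(texts=[q, p]) for q, p in train_pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    warmup_steps=1000,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    evaluation_steps=500,
)
```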
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Stefán Ólafsson (stefanola@ru.is) trained the model.
Njáll Skarphéðinsson et al. created the [RUQuAD dataset](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/310).
|
mcamara/q-FrozenLake-v1-4x4-noSlippery
|
mcamara
| 2023-07-03T10:44:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T10:44:52Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.36 +/- 0.48
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mcamara/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
boleklolek/olka
|
boleklolek
| 2023-07-03T10:42:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-03T10:37:51Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### olka Dreambooth model trained by boleklolek with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
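A minimal, hypothetical `diffusers` loading sketch; the instance prompt token "olka" is assumed from the concept name, and fp16/CUDA are optional:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("boleklolek/olka", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "olka" as the instance token is an assumption based on the concept name.
image = pipe("a portrait photo of olka, 35mm, natural light").images[0]
image.save("olka.png")
```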
Sample pictures of this concept:
|
jordimas/bloom-ctranslate2
|
jordimas
| 2023-07-03T10:37:16Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-06-28T15:02:40Z |
---
license: bigscience-bloom-rail-1.0
---
# Bloom CTranslate2's model
This is a collection of some of the [Bigscience Bloom](https://huggingface.co/bigscience/bloom) models exported to the
[CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This allows these models to be loaded and used
efficiently on CPU or GPU.
## Models
The models have been converted to *float16* and can be loaded with any other quantization method (e.g. *int8*).
| Model name | Description |
| --- | --- |
| [bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 560M parameter model pretrained on ROOTS|
| [bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 3B parameter model pretrained on ROOTS |
| [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 7.1B parameter model finetuned on xP3|
| [bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 7.1B parameter model finetuned on xP3mt |
| [mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt) | 13B parameter model finetuned on xP3|
See [directories](https://huggingface.co/jordimas/bloom-ctranslate2/tree/main) for the different models available.
## Simple code to use them
Install dependencies:
```shell
pip install huggingface_hub ctranslate2 transformers torch
```
Usage:
```python
import huggingface_hub
import ctranslate2
import transformers
model_name = "bloomz-7b1"
prompt = "Hello, I am Joan and I am from Barcelona and"
repo_id = "jordimas/bloom-ctranslate2"
snapshot_folder = huggingface_hub.snapshot_download(repo_id = repo_id, allow_patterns=f"*{model_name}*")
print(f"folder: {snapshot_folder}")
model = f"{snapshot_folder}/{model_name}"
generator = ctranslate2.Generator(model, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(model)
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([start_tokens], max_length=90)
result = tokenizer.decode(results[0].sequences_ids[0])
print(f"Result: {result}")
```
|
T-Systems-onsite/cross-en-de-fr-roberta-sentence-transformer
|
T-Systems-onsite
| 2023-07-03T10:33:40Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"de",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- de
- fr
license: mit
tags:
- sentence_embedding
---
|
CogwiseAI/testchatexample
|
CogwiseAI
| 2023-07-03T10:30:57Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-03T02:20:40Z |
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
|
ZidanSink/Kayessss
|
ZidanSink
| 2023-07-03T10:11:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-03T10:09:49Z |
---
license: creativeml-openrail-m
---
|
ecwk/distilbert-git-commits-bugfix-classification
|
ecwk
| 2023-07-03T10:09:49Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T10:08:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-git-commits-bugfix-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-git-commits-bugfix-classification
This model is a fine-tuned version of [neuralsentry/distilbert-git-commits-mlm](https://huggingface.co/neuralsentry/distilbert-git-commits-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5037
- Accuracy: 0.9231
- Precision: 0.85
- Recall: 1.0
- F1: 0.9189
- Roc Auc: 0.9318
## Model description
More information needed
## Intended uses & limitations
More information needed
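As a hypothetical usage sketch; which label index corresponds to the bugfix class is not documented here and should be checked against the model config:
```python
from transformers import pipeline

# Hypothetical usage; the label-to-class mapping is not documented in this card.
clf = pipeline("text-classification",
               model="ecwk/distilbert-git-commits-bugfix-classification")
print(clf("fix: handle null pointer dereference in parser"))
```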
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.6837 | 1.0 | 22 | 0.6040 | 0.5897 | 0.5161 | 0.9412 | 0.6667 | 0.6297 |
| 0.3852 | 2.0 | 44 | 0.2881 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.2148 | 3.0 | 66 | 0.3807 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0701 | 4.0 | 88 | 0.4934 | 0.8718 | 0.7727 | 1.0 | 0.8718 | 0.8864 |
| 0.0164 | 5.0 | 110 | 0.4892 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0039 | 6.0 | 132 | 0.4929 | 0.8974 | 0.8095 | 1.0 | 0.8947 | 0.9091 |
| 0.0012 | 7.0 | 154 | 0.4065 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0008 | 8.0 | 176 | 0.4837 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0007 | 9.0 | 198 | 0.5000 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
| 0.0006 | 10.0 | 220 | 0.5037 | 0.9231 | 0.85 | 1.0 | 0.9189 | 0.9318 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ak2704/ppo-Huggy
|
ak2704
| 2023-07-03T10:08:31Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-03T10:08:04Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ak2704/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|