| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-30 06:27:36) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 527 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-30 06:27:12) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
Aonodensetsu/codyblue-731
|
Aonodensetsu
| 2023-08-31T10:55:11Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:04:23Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **codyblue-731** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> furry, femboy"; the recommended settings are epochs 11-15, strength 0.6-0.8.

|
Aonodensetsu/cromachina
|
Aonodensetsu
| 2023-08-31T10:54:31Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:09:10Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **cromachina** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> 1girl"; the recommended settings are epochs 11-15, strength 0.5-0.8.

|
Aonodensetsu/delicious
|
Aonodensetsu
| 2023-08-31T10:54:12Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:42:21Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **delicious** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> furry"; the recommended settings are epochs 12-15, strength 0.6-0.8.

|
abhishek/llama-2-7b-hf-guanaco-sr-1
|
abhishek
| 2023-08-31T10:54:05Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-08-31T08:09:34Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama-2-7b-hf-guanaco-sr-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-7b-hf-guanaco-sr-1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
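For context, the linear schedule with warmup above ramps the learning rate from 0 to 2e-4 over the first 10% of steps, then decays it linearly to 0. A minimal sketch (the total step count is an assumed, illustrative value; `transformers` computes it from the dataset size):

```python
def lr_at(step, base_lr=2e-4, warmup_ratio=0.1, total_steps=1000):
    """Linear warmup followed by linear decay, as configured above."""
    warmup = int(total_steps * warmup_ratio)
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup))

lr_at(50)     # halfway through warmup: ~1e-4
lr_at(100)    # peak: 2e-4
lr_at(1000)   # end of training: 0.0
```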
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Aonodensetsu/darkmirage
|
Aonodensetsu
| 2023-08-31T10:53:59Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:24:00Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **darkmirage** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> furry"; the recommended settings are epochs 13-14, strength 0.5-0.7.

|
Aonodensetsu/frenky_hw
|
Aonodensetsu
| 2023-08-31T10:53:21Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:53:19Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **frenky_hw** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> furry, male, girly"; the recommended settings are epochs 11-13, strength 0.6-0.8.

|
Aonodensetsu/gothbunnyboy
|
Aonodensetsu
| 2023-08-31T10:53:05Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T12:57:11Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **gothbunnyboy** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> furry"; the recommended settings are epochs 11-15, strength 0.6-0.8.

|
vishnuhaasan/q-FrozenLake-v1-4x4-noSlippery
|
vishnuhaasan
| 2023-08-31T10:52:54Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T10:52:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course materials.
model = load_from_hub(repo_id="vishnuhaasan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
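For context, the `q-learning.pkl` checkpoint stores a Q-table trained with the tabular Bellman update. A minimal sketch of that update (the learning rate and discount below are illustrative assumptions, not the values used for this repo):

```python
import numpy as np

n_states, n_actions = 16, 4              # FrozenLake-v1 4x4 grid
q_table = np.zeros((n_states, n_actions))
alpha, gamma = 0.7, 0.95                 # assumed learning rate and discount

def q_update(state, action, reward, next_state):
    """Move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])

q_update(0, 1, 0.0, 4)    # an ordinary step: no reward on the frozen lake
q_update(14, 2, 1.0, 15)  # reaching the goal state yields reward 1
```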
|
Aonodensetsu/pumpkinspicelatte
|
Aonodensetsu
| 2023-08-31T10:52:39Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-08-15T13:04:44Z |
---
license: gpl-3.0
---
This is a mirror of CivitAI.
The style of artist **pumpkinspicelatte** trained for [Foxya v3](https://civitai.com/models/17138).
The preview image uses the prompt "\<lyco\> 1girl"; the recommended settings are epochs 10-15, strength 0.6-0.9.

|
ardt-multipart/ardt-multipart-ppo_train_walker2d_level-3108_0934-33
|
ardt-multipart
| 2023-08-31T10:39:20Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T08:36:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-ppo_train_walker2d_level-3108_0934-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-ppo_train_walker2d_level-3108_0934-33
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SHENMU007/neunit_BASE_V9.5.9
|
SHENMU007
| 2023-08-31T10:36:34Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-08-31T09:35:39Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
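The batch-size figures above are related by simple multiplication: each optimizer step accumulates gradients over 4 micro-batches of 8 samples, giving the effective batch of 32. As arithmetic (assuming `training_steps` counts optimizer updates, as the Trainer does):

```python
train_batch_size = 8
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32

training_steps = 4000
micro_batch_passes = training_steps * gradient_accumulation_steps        # 16000
samples_seen = training_steps * total_train_batch_size                   # 128000
```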
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/dt-ppo_train_hopper_level-3108_1003-66
|
dt-and-vanilla-ardt
| 2023-08-31T10:15:36Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T09:04:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: dt-ppo_train_hopper_level-3108_1003-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-ppo_train_hopper_level-3108_1003-66
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sumet/Test_Trocr_digit_handwriting
|
sumet
| 2023-08-31T09:52:03Z | 201 | 2 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"trocr",
"image-to-text",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-08-30T02:28:52Z |
---
tags:
- trocr
- image-to-text
---
|
phillipos99/ppo-LunarLander-v2
|
phillipos99
| 2023-08-31T09:51:38Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T09:51:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.40 +/- 17.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list for the actual name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it as a PPO policy.
checkpoint = load_from_hub(repo_id="phillipos99/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vnktrmnb/MBERT_FT-TyDiQA_S67
|
vnktrmnb
| 2023-08-31T09:45:40Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-30T06:03:12Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/MBERT_FT-TyDiQA_S67
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/MBERT_FT-TyDiQA_S67
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3185
- Train End Logits Accuracy: 0.9077
- Train Start Logits Accuracy: 0.9272
- Validation Loss: 0.5503
- Validation End Logits Accuracy: 0.875
- Validation Start Logits Accuracy: 0.9111
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
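The `PolynomialDecay` settings above (power 1.0, no cycling) reduce to a linear ramp from 2e-05 to 0 over 2412 steps. A sketch of the formula Keras applies:

```python
def poly_decay(step, initial_lr=2e-05, decay_steps=2412, end_lr=0.0, power=1.0):
    """Keras PolynomialDecay; with power=1.0 this is plain linear decay."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

poly_decay(0)      # 2e-05 at the start
poly_decay(2412)   # 0.0 at the end
```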
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.6586 | 0.8284 | 0.8598 | 0.5000 | 0.8737 | 0.9124 | 0 |
| 0.4565 | 0.8766 | 0.8978 | 0.5009 | 0.8776 | 0.9175 | 1 |
| 0.3185 | 0.9077 | 0.9272 | 0.5503 | 0.875 | 0.9111 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ukr-models/xlm-roberta-base-uk
|
ukr-models
| 2023-08-31T09:41:51Z | 526 | 12 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"ukrainian",
"uk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-11T10:53:02Z |
---
language:
- uk
tags:
- ukrainian
widget:
- text: "Тарас Шевченко – великий український <mask>."
license: mit
---
This is a smaller version of the [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) model with only Ukrainian and some English embeddings left.
* The original model has 470M parameters, with 384M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 31K (the top 25K Ukrainian tokens plus the top English tokens), the parameter count dropped to 134M and the model size shrank from 1 GB to 400 MB.
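Conceptually, the shrinking keeps only the embedding rows for the retained token ids. A toy sketch of that selection (shapes mirror the description; hidden size 768 is the standard XLM-R base value, and the real procedure also rebuilds the `sentencepiece` vocabulary to match):

```python
import numpy as np

hidden_size = 768
full_embeddings = np.zeros((250_000, hidden_size))   # original vocab x hidden
kept_token_ids = np.arange(31_000)                   # ids of the 31K kept tokens
small_embeddings = full_embeddings[kept_token_ids]   # shrunken embedding matrix
small_embeddings.shape                               # (31000, 768)
```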
|
ukr-models/uk-ner
|
ukr-models
| 2023-08-31T09:41:21Z | 188 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"ukrainian",
"uk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-07T05:31:07Z |
---
language:
- uk
tags:
- ukrainian
widget:
- text: "Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."
license: mit
---
## Model Description
Fine-tuning of the [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on a [synthetic NER dataset](https://huggingface.co/datasets/ukr-models/Ukr-Synth) with B-PER, I-PER, B-LOC, I-LOC, B-ORG, and I-ORG tags.
## How to Use
Using the Hugging Face pipeline (returns tokens with labels):
```py
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-ner')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-ner')
ner = pipeline('ner', model=model, tokenizer=tokenizer)
ner("Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера.")
```
If you wish to get predictions split by words rather than by tokens, you may use the following approach (download the script `get_predictions.py` from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for word splitting):
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-ner')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-ner')
get_word_predictions(model, tokenizer, ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."])
```
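For context, the word-level post-processing boils down to merging BIO token labels into entity spans. A hedged sketch of that merging step (illustrative code, not the repo's `get_predictions.py`):

```python
def merge_bio(tokens, labels):
    """Merge BIO-labelled tokens into (entity_type, text) spans."""
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                spans.append(current)
            current = [lab[2:], [tok]]          # start a new span
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)              # continue the open span
        else:
            if current:
                spans.append(current)
            current = None                      # "O" label closes any span
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]

merge_bio(["Тарас", "Шевченко", "жив", "у", "Каневі"],
          ["B-PER", "I-PER", "O", "O", "B-LOC"])
# -> [('PER', 'Тарас Шевченко'), ('LOC', 'Каневі')]
```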
|
ukr-models/uk-morph
|
ukr-models
| 2023-08-31T09:41:07Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"ukrainian",
"uk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-08T07:14:02Z |
---
language:
- uk
tags:
- ukrainian
widget:
- text: "Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."
license: mit
---
## Model Description
Fine-tuning of the [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on a [synthetic morphological dataset](https://huggingface.co/datasets/ukr-models/Ukr-Synth); it returns both UPOS tags and morphological features, joined by a double underscore.
## How to Use
Using the Hugging Face pipeline (returns tokens with labels):
```py
from transformers import TokenClassificationPipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-morph')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-morph')
ppln = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
ppln("Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера.")
```
If you wish to get predictions split by words rather than by tokens, you may use the following approach (download the script `get_predictions.py` from the repository; it uses the [tokenize_uk package](https://pypi.org/project/tokenize_uk/) for word splitting):
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from get_predictions import get_word_predictions
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-morph')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-morph')
get_word_predictions(model, tokenizer, ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."])
```
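A combined label such as `NOUN__Case=Nom|Gender=Fem` (the exact value here is illustrative) can be split back into the UPOS tag and a feature dict:

```python
def split_label(label):
    """Split 'UPOS__Feat=Val|Feat=Val' into the UPOS tag and a feature dict."""
    upos, _, feats = label.partition("__")
    features = dict(item.split("=") for item in feats.split("|")) if feats else {}
    return upos, features

split_label("NOUN__Case=Nom|Gender=Fem")  # ('NOUN', {'Case': 'Nom', 'Gender': 'Fem'})
```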
|
ukr-models/uk-punctcase
|
ukr-models
| 2023-08-31T09:40:36Z | 118 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"ukrainian",
"uk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T11:50:18Z |
---
language:
- uk
tags:
- ukrainian
widget:
- text: "упродовж 2012-2014 років національний природний парк «зачарований край» разом із всесвітнім фондом природи wwf успішно реалізували проект із відновлення болота «чорне багно» розташованого на схилах гори бужора у закарпатті водноболотне угіддя «чорне багно» є найбільшою болотною екосистемою регіону воно займає площу близько 15 га унікальністю цього високогірного болота розташованого на висоті 840 м над рівнем моря є велика потужність торфових покладів (глибиною до 59 м) і своєрідна рослинність у 50-х і на початку 60-х років минулого століття на природних потічках що протікали через болото побудували осушувальні канали це порушило природну рівновагу відтак змінилася екосистема болота"
license: mit
---
## Model Description
Fine-tuning of the [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on Ukrainian texts to recover punctuation and case.
## How to Use
Download the script `get_predictions.py` from the repository:
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from get_predictions import recover_text
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-punctcase')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-punctcase')
text = "..."
recover_text(text, model, tokenizer)
```
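The recovery step amounts to applying per-word case and punctuation labels. A hedged sketch (the label scheme below is an assumption for illustration; `get_predictions.py` defines the real one):

```python
def apply_labels(words, labels):
    """Capitalize and re-punctuate words according to their predicted labels."""
    out = []
    for word, label in zip(words, labels):
        if "CAP" in label:
            word = word.capitalize()
        if label.endswith("PERIOD"):
            word += "."
        elif label.endswith("COMMA"):
            word += ","
        out.append(word)
    return " ".join(out)

apply_labels(["болото", "відновили", "у", "2014", "році"],
             ["CAP", "O", "O", "O", "PERIOD"])  # 'Болото відновили у 2014 році.'
```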
|
ukr-models/uk-summarizer
|
ukr-models
| 2023-08-31T09:40:08Z | 132 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"ukrainian",
"uk",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T13:21:16Z |
---
language:
- uk
tags:
- ukrainian
license: mit
---
## Model Description
Fine-tuning of the [uk-mt5-base](https://huggingface.co/kravchenko/uk-mt5-base) model on a summarization dataset.
## How to Use
```py
from transformers import AutoTokenizer, T5ForConditionalGeneration, pipeline
tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-summarizer')
model = T5ForConditionalGeneration.from_pretrained('ukr-models/uk-summarizer')
ppln = pipeline("summarization", model=model, tokenizer=tokenizer, device=0, max_length=128, num_beams=4, no_repeat_ngram_size=2, clean_up_tokenization_spaces=True)
text = "..."
ppln(text)
```
|
UholoDala/sentence_sentiments_analysis_roberta
|
UholoDala
| 2023-08-31T09:39:09Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T06:42:32Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: sentence_sentiments_analysis_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_sentiments_analysis_roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2736
- F1-score: 0.9119
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3477 | 1.0 | 2500 | 0.3307 | 0.9112 |
| 0.2345 | 2.0 | 5000 | 0.2736 | 0.9119 |
| 0.175 | 3.0 | 7500 | 0.3625 | 0.9161 |
| 0.1064 | 4.0 | 10000 | 0.3272 | 0.9358 |
| 0.07 | 5.0 | 12500 | 0.3291 | 0.9380 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AndrewL088/Pixelcopter-2
|
AndrewL088
| 2023-08-31T09:38:36Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T09:38:31Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.10 +/- 22.06
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
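For context, a Reinforce agent like this one is trained on discounted returns computed backwards over each episode. A minimal sketch of that computation (the gamma value here is illustrative):

```python
def discounted_returns(rewards, gamma=0.99):
    """Return-to-go for each timestep, computed backwards over an episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

discounted_returns([1.0, 1.0, 1.0], gamma=0.5)  # [1.75, 1.5, 1.0]
```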
|
Datactive/BERT_pap_queries_classification_2
|
Datactive
| 2023-08-31T09:37:52Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-29T20:36:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Datactive/BERT_pap_queries_classification_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Datactive/BERT_pap_queries_classification_2
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1558
- Validation Loss: 0.1369
- Train F1: 0.9475
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1463, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.1558 | 0.1369 | 0.9475 | 0 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-ppo_train_hopper_level-3108_0919-66
|
ardt-multipart
| 2023-08-31T09:36:54Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T08:21:10Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-ppo_train_hopper_level-3108_0919-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-ppo_train_hopper_level-3108_0919-66
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lomahony/eleuther-pythia12b-hh-sft
|
lomahony
| 2023-08-31T09:34:04Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:Anthropic/hh-rlhf",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T10:52:10Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
---
[Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) supervised fine-tuned on the [Anthropic hh-rlhf dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) for 1 epoch.
[wandb log](https://wandb.ai/pythia_dpo/Pythia_LOM/runs/hdct406x)
Benchmark evaluations included in the repo were done using the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor).
See [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) for model details [(paper)](https://arxiv.org/abs/2101.00027).
|
AK-12/my_awesome_model
|
AK-12
| 2023-08-31T09:33:02Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-27T10:46:57Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3616
- Accuracy: 0.9475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2804 | 1.0 | 100 | 0.3327 | 0.9475 |
| 0.4089 | 2.0 | 200 | 0.3448 | 0.955 |
| 0.0564 | 3.0 | 300 | 0.3446 | 0.95 |
| 0.0 | 4.0 | 400 | 0.3417 | 0.9475 |
| 0.0 | 5.0 | 500 | 0.3616 | 0.9475 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aviroes/MAScIR_elderly_whisper-medium-LoRA
|
aviroes
| 2023-08-31T09:31:02Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | 2023-08-31T07:02:39Z |
---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: MAScIR_elderly_whisper-medium-LoRA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MAScIR_elderly_whisper-medium-LoRA
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3209 | 0.19 | 100 | 0.3262 |
| 0.2482 | 0.37 | 200 | 0.3101 |
| 0.2726 | 0.56 | 300 | 0.3030 |
| 0.2288 | 0.74 | 400 | 0.2848 |
| 0.2014 | 0.93 | 500 | 0.2586 |
| 0.1277 | 1.11 | 600 | 0.2098 |
| 0.1054 | 1.3 | 700 | 0.1857 |
| 0.1056 | 1.48 | 800 | 0.1449 |
| 0.0842 | 1.67 | 900 | 0.1069 |
| 0.0692 | 1.85 | 1000 | 0.0874 |
| 0.0314 | 2.04 | 1100 | 0.0628 |
| 0.0265 | 2.22 | 1200 | 0.0515 |
| 0.0154 | 2.41 | 1300 | 0.0443 |
| 0.0127 | 2.59 | 1400 | 0.0382 |
| 0.0237 | 2.78 | 1500 | 0.0290 |
| 0.0119 | 2.96 | 1600 | 0.0224 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Dala/mlc-chat-vicuna-13b-v1.5
|
Dala
| 2023-08-31T09:23:54Z | 0 | 1 | null |
[
"license:llama2",
"region:us"
] | null | 2023-08-25T17:42:24Z |
---
inference: false
license: llama2
model_type: llama
model_creator: lmsys
model_link: https://huggingface.co/lmsys/vicuna-13b-v1.5
model_name: Vicuna 13B v1.5
quantized_by: Dala
---
# Vicuna 13B v1.5 - MLC
- Model creator: [lmsys](https://huggingface.co/lmsys)
- Original model: [Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
## Description
This repo contains the [MLC](https://mlc.ai/mlc-llm/) compiled parameters for [lmsys's Vicuna 13B v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5).
It contains several quantizations, each in its own branch:
- main (q4f16_1) <-- You are currently on this branch
- q4f16_2
- q8f16_1
- autogptq_llama_q4f16_1
To run the model, please check out the [MLC instructions](https://mlc.ai/mlc-llm/docs/get_started/try_out.html).
In case the model libraries are not yet available in the [binary libs repo](https://github.com/mlc-ai/binary-mlc-llm-libs), please obtain them from [this PR](https://github.com/mlc-ai/binary-mlc-llm-libs/pull/15/files).
|
dt-and-vanilla-ardt/ardt-vanilla-ppo_train_halfcheetah_level-3108_0816-99
|
dt-and-vanilla-ardt
| 2023-08-31T09:23:43Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T07:18:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-ppo_train_halfcheetah_level-3108_0816-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-ppo_train_halfcheetah_level-3108_0816-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
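Under the linear scheduler with 1000 warmup steps, the learning rate ramps up from zero and then decays linearly back to zero over the 10000 training steps. A minimal sketch of that schedule (assuming the behaviour of Hugging Face's `get_linear_schedule_with_warmup`; function name is illustrative):

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=10000):
    """Linear warmup to base_lr, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```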
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pvcodes/comment_toxicity_classifier
|
pvcodes
| 2023-08-31T09:22:36Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-28T14:03:39Z |
---
license: mit
---
<h1 align=center>Comment Toxicity Classification</h1>
This model predicts whether a comment/sentence is hateful along six attributes: toxicity, severe toxicity, obscene, threat, insult and racism.
### Test the model here : <a href="https://huggingface.co/spaces/pvcodes/comment_toxicity_classifier">pvcodes/comment_toxicity_classifier</a>
<br>
## Working of the Model
- #### Loading the Data
 The data is fetched from a <a href='assets/jigsaw_toxic_challenge/train.csv/train.csv'>CSV</a> file, which consists of comments and their attributes: toxicity, severe toxicity, obscene, threat, insult and racism.
- #### Preprocessing the Comments
 The comments are tokenized using the `TextVectorization` layer of `keras` and then embedded.
- #### Creating the <em>Deep NLP Model</em>
 The model is built with the `Keras` Sequential API using a number of `LSTM` layers (because they are particularly good at working with sequences).
- The dataset used to train the model is from the <a href=https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge>Toxic Comment Classification Challenge</a> on <a href=https://www.kaggle.com>Kaggle</a>.
##### Note: The trained model is available <a href='assets/toxicity.h5'>here</a>.
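The model outputs six sigmoid scores, one per attribute; turning them into binary flags is a simple thresholding step. A sketch (the label order and function name are assumptions for illustration):

```python
def scores_to_flags(scores, threshold=0.5,
                    labels=("toxicity", "severe toxicity", "obscene",
                            "threat", "insult", "racism")):
    """Map the model's six sigmoid scores to per-label boolean flags."""
    return {label: float(score) >= threshold
            for label, score in zip(labels, scores)}
```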
<samp>
<p align="center">
════ ⋆★⋆ ════<br>
From <a href="https://github.com/pvcodes/pvcodes">pvcodes</a>
</p>
</samp>
|
moro01525/mlm
|
moro01525
| 2023-08-31T09:16:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:moro01525/mlm",
"base_model:finetune:moro01525/mlm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-29T09:25:43Z |
---
base_model: moro01525/mlm
tags:
- generated_from_trainer
model-index:
- name: mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm
This model is a fine-tuned version of [moro01525/mlm](https://huggingface.co/moro01525/mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9361
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.248 | 1.0 | 582 | 4.9240 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Norod78/SDXL-StickerSheet-Lora
|
Norod78
| 2023-08-31T09:12:04Z | 248 | 33 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-31T09:04:29Z |
---
license: mit
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: StickerSheet
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: Cute sparkle pink barbie StickerSheet
- text: Cthulhu StickerSheet based on H.P Lovecraft stories
- text: Cute sparkle rainbow kitten StickerSheet, Eric Wallis
- text: Cute socially awkward potato StickerSheet
inference: true
language:
- en
---
# Trigger words
Use "StickerSheet" in your prompts
# Examples
Cute sparkle pink barbie StickerSheet, Very detailed, clean, high quality, sharp image, Eric Wallis

Cthulhu StickerSheet, based on H.P Lovecraft stories, Very detailed, clean, high quality, sharp image

|
iloya/Taxi-v3
|
iloya
| 2023-08-31T09:08:28Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T09:08:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="iloya/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
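Once the Q-table is loaded, acting greedily just means picking the highest-valued action for the current state. A pure-Python sketch (treating the stored Q-table as a per-state list of action values is an assumption about the pickle's format):

```python
def greedy_action(qtable, state):
    """Return the index of the highest-valued action for `state`."""
    row = qtable[state]
    return max(range(len(row)), key=row.__getitem__)
```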
|
Hozier/sd-class-butterflies-32
|
Hozier
| 2023-08-31T08:57:55Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-31T08:53:04Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Hozier/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
kimi0230/TestModel
|
kimi0230
| 2023-08-31T08:56:33Z | 1 | 0 | null |
[
"tf",
"generated_from_keras_callback",
"dataset:fka/awesome-chatgpt-prompts",
"license:mit",
"region:us"
] | null | 2023-08-31T07:44:10Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: chatgpt-gpt4-prompts-bart-large-cnn-samsum
results: []
datasets:
- fka/awesome-chatgpt-prompts
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chatgpt-gpt4-prompts-bart-large-cnn-samsum
This model generates ChatGPT/BingChat & GPT-3 prompts and is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on [this](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2214
- Validation Loss: 2.7584
- Epoch: 4
### Streamlit
This model supports a [Streamlit](https://streamlit.io/) Web UI to run the chatgpt-gpt4-prompts-bart-large-cnn-samsum model:
[](https://huggingface.co/spaces/Kaludi/ChatGPT-BingChat-GPT3-Prompt-Generator_App)
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1982 | 2.6801 | 0 |
| 2.3601 | 2.5493 | 1 |
| 1.9225 | 2.5377 | 2 |
| 1.5465 | 2.6794 | 3 |
| 1.2214 | 2.7584 | 4 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
iloya/q-FrozenLake-v1-4x4-noSlippery
|
iloya
| 2023-08-31T08:55:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T08:55:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym` on newer setups

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="iloya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
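For reference, the tabular Q-learning update such an agent is trained with can be sketched as follows (hyperparameter names and the list-of-lists Q-table are illustrative):

```python
def q_learning_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update of the Q-table, in place."""
    q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
    return q
```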
|
Alexa06/54yg
|
Alexa06
| 2023-08-31T08:53:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-31T08:52:14Z |
photo_5154810918662679307_x.jpg
|
vnktrmnb/MBERT_FT-TyDiQA_S59
|
vnktrmnb
| 2023-08-31T08:34:15Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-30T05:18:51Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/MBERT_FT-TyDiQA_S59
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/MBERT_FT-TyDiQA_S59
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6175
- Train End Logits Accuracy: 0.8417
- Train Start Logits Accuracy: 0.8693
- Validation Loss: 0.4662
- Validation End Logits Accuracy: 0.8789
- Validation Start Logits Accuracy: 0.9162
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
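With `power=1.0` and `end_learning_rate=0.0`, this `PolynomialDecay` schedule reduces to a plain linear decay from 2e-05 to zero over 2412 steps. A minimal sketch of the schedule value (function name is illustrative):

```python
def polynomial_decay(step, initial_lr=2e-5, decay_steps=2412, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay: interpolate from initial_lr to end_lr."""
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```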
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4412 | 0.6715 | 0.7002 | 0.4875 | 0.8570 | 0.8943 | 0 |
| 0.8493 | 0.7898 | 0.8229 | 0.4547 | 0.8686 | 0.9137 | 1 |
| 0.6175 | 0.8417 | 0.8693 | 0.4662 | 0.8789 | 0.9162 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
parksuna/xlm-roberta-base-finetuned-panx-de
|
parksuna
| 2023-08-31T08:29:59Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-31T08:25:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8657241810026685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1338
- F1: 0.8657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1557 | 0.8218 |
| 0.126 | 2.0 | 1050 | 0.1460 | 0.8521 |
| 0.0827 | 3.0 | 1575 | 0.1338 | 0.8657 |
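The F1 reported above is entity-level F1 (such scripts typically use seqeval). Given span-level true-positive/false-positive/false-negative counts, the metric reduces to the usual harmonic mean of precision and recall — a quick sketch:

```python
def f1_from_counts(tp, fp, fn):
    """Entity-level F1 from span-level match counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```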
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vnktrmnb/MBERT_FT-TyDiQA_S531
|
vnktrmnb
| 2023-08-31T08:22:15Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-31T07:27:40Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/MBERT_FT-TyDiQA_S531
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/MBERT_FT-TyDiQA_S531
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6202
- Train End Logits Accuracy: 0.8376
- Train Start Logits Accuracy: 0.8661
- Validation Loss: 0.4939
- Validation End Logits Accuracy: 0.8647
- Validation Start Logits Accuracy: 0.9046
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4876 | 0.6535 | 0.6831 | 0.5669 | 0.8222 | 0.8698 | 0 |
| 0.8473 | 0.7841 | 0.8173 | 0.4769 | 0.8647 | 0.9059 | 1 |
| 0.6202 | 0.8376 | 0.8661 | 0.4939 | 0.8647 | 0.9046 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
emotibot-inc/Zhuhai-13B
|
emotibot-inc
| 2023-08-31T08:13:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-30T09:33:05Z |
# README
# Zhuhai-13B
[Hugging Face](https://huggingface.co/emotibot-inc/Zhuhai-13B) | [GitHub](https://github.com/emotibot-inc/Zhuhai-13B) | [Model Scope](https://modelscope.cn/models/emotibotinc/Zhuhai-13B/summary) | [Emotibrain](https://brain.emotibot.com/?source=zhuhai13b_huggingface)
# **Model Introduction**
Zhuhai-13B is a large language model developed by Emotibot as the successor to Zhuhai-7B. Its main features are:
- Larger size, more data: compared with Zhuhai-7B, we scaled the parameter count up to 13 billion and trained on 1.2 trillion tokens of high-quality corpus. Zhuhai-13B has a context window of 4096 tokens.
- Efficient performance: a 13-billion-parameter Transformer-based model trained on roughly 1.2 trillion tokens, supporting both Chinese and English.
- Safety: we applied strict safety controls and optimizations to Zhuhai-13B to ensure it does not produce inappropriate or misleading output in real-world use. Through careful design and tuning of algorithm parameters, Zhuhai-13B effectively avoids generating nonsensical responses.
# Model **benchmark**
## **Chinese Evaluation** - **CMMLU**
### Result
| Model 5-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 |
| [ChatGPT](https://openai.com/chatgpt) | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | 37.04 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 27.52 | 32.47 | 32.27 | 35.77 | 31.56 | 31.88 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 40.30 | 48.08 | 46.72 | 47.41 | 45.51 | 45.60 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.39 | 61.57 | 60.48 | 58.57 | 55.68 | 55.74 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 42.55 | 50.98 | 50.99 | 50.80 | 48.37 | 48.80 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 34.96 | 35.45 | 36.31 | 42.14 | 37.89 | 37.16 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | 32.63 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 27.23 | 30.41 | 28.84 | 32.56 | 28.68 | 29.57 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.49 | 27.05 | 27.42 | 29.21 | 28.05 | 27.26 |
| Random | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 |
| Model 0-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 63.16 | 69.19 | 70.26 | 73.16 | 63.47 | 68.9 |
| [ChatGPT](https://openai.com/chatgpt) | 44.8 | 53.61 | 54.22 | 59.95 | 49.74 | 53.22 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 33.03 | 45.74 | 45.74 | 46.25 | 41.58 | 42.8 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 31.11 | 41.3 | 40.87 | 40.61 | 36.05 | 38.5 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 31.09 | 34.45 | 36.05 | 37.94 | 32.89 | 34.88 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 26.46 | 29.36 | 31.81 | 31.55 | 29.17 | 30.06 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 42.51 | 48.91 | 48.85 | 50.25 | 47.57 | 47.62 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.37 | 60.97 | 59.71 | 56.35 | 54.81 | 54.84 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.04 | 60.49 | 59.55 | 56.6 | 55.72 | 54.63 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.6 | 41.93 | 40.79 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.7 | 26.88 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.8 |
| Random | 25 | 25 | 25 | 25 | 25 | 25 |
# **Inference and Chat**
You can register and log in to Emotibot's large-model product [Emotibrain](https://brain.emotibot.com/?source=zhuhai13b_huggingface) and select **CoPilot** (**KKBot**) for online testing; it is available immediately after registration.

# **Model Training**
You can register and log in to Emotibot's large-model product [Emotibrain](https://brain.emotibot.com/?source=zhuhai13b_huggingface) and select Fine-tune for **zero-code fine-tuning**; it is available immediately after registration.
For the detailed training workflow, see this document: [Emotibrain Quick Start](https://brain.emotibot.com/supports/model-factory/dash-into.html) (about 5 minutes).


# **More Information**
To learn more about the large-model training platform, please visit the [Emotibrain website](https://brain.emotibot.com/?source=zhuhai13b_huggingface).
|
Geotrend/bert-base-en-fr-ar-cased
|
Geotrend
| 2023-08-31T08:03:30Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Geotrend/bert-base-en-pt-cased
|
Geotrend
| 2023-08-31T08:03:02Z | 116 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"en",
"pt",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- multilingual
- en
- pt
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
ThanhMai/green-clip-inpaint
|
ThanhMai
| 2023-08-31T08:01:58Z | 21 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T08:01:05Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### green clip inpaint on Stable Diffusion via Dreambooth
#### model by ThanhMai
This is the Stable Diffusion model fine-tuned on the green clip inpaint concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<green-clip> clip**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:







|
SCUT-DLVCLab/lilt-roberta-en-base
|
SCUT-DLVCLab
| 2023-08-31T07:59:36Z | 19,575 | 18 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"lilt",
"feature-extraction",
"vision",
"arxiv:2202.13669",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-09-29T14:06:32Z |
---
license: mit
tags:
- vision
---
# LiLT-RoBERTa (base-sized model)
Language-Independent Layout Transformer - RoBERTa model by stitching a pre-trained RoBERTa (English) and a pre-trained Language-Independent Layout Transformer (LiLT) together. It was introduced in the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Wang et al. and first released in [this repository](https://github.com/jpwang/lilt).
Disclaimer: The team releasing LiLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Language-Independent Layout Transformer (LiLT) makes it possible to combine any pre-trained RoBERTa encoder from the hub (hence, in any language) with a lightweight Layout Transformer to obtain a LayoutLM-like model for any language.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/lilt_architecture.jpg" alt="drawing" width="600"/>
## Intended uses & limitations
The model is meant to be fine-tuned on tasks like document image classification, document parsing and document QA. See the [model hub](https://huggingface.co/models?search=lilt) to look for fine-tuned versions on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/lilt.html).
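As a quick note, LiLT — like the LayoutLM family — expects token bounding boxes normalized to a 0–1000 coordinate range. A small helper for that (a sketch, assuming pixel-space `(x0, y0, x1, y1)` boxes; the function name is illustrative):

```python
def normalize_bbox(bbox, width, height):
    """Scale a pixel-space (x0, y0, x1, y1) box to the 0-1000 range LiLT expects."""
    x0, y0, x1, y1 = bbox
    return [int(1000 * x0 / width), int(1000 * y0 / height),
            int(1000 * x1 / width), int(1000 * y1 / height)]
```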
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2202.13669,
doi = {10.48550/ARXIV.2202.13669},
url = {https://arxiv.org/abs/2202.13669},
author = {Wang, Jiapeng and Jin, Lianwen and Ding, Kai},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
yuekai/model_repo_whisper_large_v2
|
yuekai
| 2023-08-31T07:57:15Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-08-17T09:53:49Z |
### Client
https://huggingface.co/spaces/yuekai/triton-asr-client
https://github.com/yuekaizhang/Triton-ASR-Client
### Server
```sh
docker pull soar97/triton-whisper:23.06
docker run -it --name "whisper-server" --gpus all --net host -v $your_mount_dir --shm-size=2g soar97/triton-whisper:23.06
apt-get install git-lfs
git-lfs install
git clone https://huggingface.co/yuekai/model_repo_whisper_large_v2.git
export CUDA_VISIBLE_DEVICES="1"
model_repo_path=./model_repo_whisper
tritonserver --model-repository $model_repo_path \
--pinned-memory-pool-byte-size=2048000000 \
--cuda-memory-pool-byte-size=0:4096000000 \
--http-port 10086 \
--metrics-port 10087
```
### Benchmark Results
Decoding on a single V100 GPU; audios are padded to 30s, using the aishell1 test set files.
| Model | Backend | Concurrency | RTF |
|-------|-----------|-----------------------|---------|
| Large-v2 | ONNX FP16 | 4 | 0.14 |
|Module| Time Distribution|
|--|--|
|feature_extractor|0.8%|
|encoder|9.6%|
|decoder|67.4%|
|greedy search|22.2%|
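The RTF reported above is the ratio of processing time to audio duration (lower is better, and RTF < 1 means faster than real time). A minimal sketch of how such a number is derived (function and variable names are illustrative, not from the benchmark code):

```python
def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: wall-clock decoding time divided by audio duration."""
    return processing_seconds / audio_seconds

# Example: decoding 100 s of padded audio in 14 s gives the table's RTF of 0.14
print(real_time_factor(14.0, 100.0))  # → 0.14
```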
|
victornica/mini_molformer_gsf_6epochs
|
victornica
| 2023-08-31T07:56:29Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-30T22:56:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mini_molformer_gsf_6epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini_molformer_gsf_6epochs
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
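The total train batch size listed above is simply the per-device batch size scaled by gradient accumulation; a quick check:

```python
# Hyperparameters as reported in the list above
train_batch_size = 32
gradient_accumulation_steps = 8

# Effective (total) batch size seen by each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # → 256, matching the reported value
```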
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7953 | 0.1 | 1000 | 1.0871 |
| 1.0284 | 0.19 | 2000 | 0.9575 |
| 0.9463 | 0.29 | 3000 | 0.9099 |
| 0.9048 | 0.39 | 4000 | 0.8758 |
| 0.877 | 0.48 | 5000 | 0.8517 |
| 0.8573 | 0.58 | 6000 | 0.8323 |
| 0.8399 | 0.68 | 7000 | 0.8176 |
| 0.8276 | 0.77 | 8000 | 0.8127 |
| 0.8164 | 0.87 | 9000 | 0.8037 |
| 0.8071 | 0.97 | 10000 | 0.7889 |
| 0.7969 | 1.07 | 11000 | 0.7815 |
| 0.7901 | 1.16 | 12000 | 0.7742 |
| 0.7844 | 1.26 | 13000 | 0.7710 |
| 0.778 | 1.36 | 14000 | 0.7633 |
| 0.7732 | 1.45 | 15000 | 0.7605 |
| 0.7695 | 1.55 | 16000 | 0.7567 |
| 0.7646 | 1.65 | 17000 | 0.7486 |
| 0.7606 | 1.74 | 18000 | 0.7462 |
| 0.7576 | 1.84 | 19000 | 0.7434 |
| 0.7539 | 1.94 | 20000 | 0.7376 |
| 0.7484 | 2.03 | 21000 | 0.7343 |
| 0.7423 | 2.13 | 22000 | 0.7318 |
| 0.7403 | 2.23 | 23000 | 0.7270 |
| 0.7364 | 2.32 | 24000 | 0.7274 |
| 0.7341 | 2.42 | 25000 | 0.7206 |
| 0.7321 | 2.52 | 26000 | 0.7204 |
| 0.728 | 2.61 | 27000 | 0.7152 |
| 0.7253 | 2.71 | 28000 | 0.7131 |
| 0.7224 | 2.81 | 29000 | 0.7099 |
| 0.7198 | 2.91 | 30000 | 0.7073 |
| 0.7166 | 3.0 | 31000 | 0.7039 |
| 0.7079 | 3.1 | 32000 | 0.7009 |
| 0.7074 | 3.2 | 33000 | 0.6980 |
| 0.7051 | 3.29 | 34000 | 0.6951 |
| 0.703 | 3.39 | 35000 | 0.6924 |
| 0.7008 | 3.49 | 36000 | 0.6895 |
| 0.6971 | 3.58 | 37000 | 0.6873 |
| 0.6943 | 3.68 | 38000 | 0.6854 |
| 0.6931 | 3.78 | 39000 | 0.6814 |
| 0.6899 | 3.87 | 40000 | 0.6799 |
| 0.6874 | 3.97 | 41000 | 0.6770 |
| 0.6805 | 4.07 | 42000 | 0.6740 |
| 0.6762 | 4.16 | 43000 | 0.6722 |
| 0.6753 | 4.26 | 44000 | 0.6689 |
| 0.6721 | 4.36 | 45000 | 0.6668 |
| 0.671 | 4.45 | 46000 | 0.6643 |
| 0.6686 | 4.55 | 47000 | 0.6627 |
| 0.6664 | 4.65 | 48000 | 0.6604 |
| 0.6654 | 4.75 | 49000 | 0.6581 |
| 0.6635 | 4.84 | 50000 | 0.6565 |
| 0.6617 | 4.94 | 51000 | 0.6548 |
| 0.6577 | 5.04 | 52000 | 0.6532 |
| 0.6527 | 5.13 | 53000 | 0.6522 |
| 0.6514 | 5.23 | 54000 | 0.6508 |
| 0.6501 | 5.33 | 55000 | 0.6498 |
| 0.6494 | 5.42 | 56000 | 0.6489 |
| 0.6484 | 5.52 | 57000 | 0.6483 |
| 0.6484 | 5.62 | 58000 | 0.6477 |
| 0.6474 | 5.71 | 59000 | 0.6473 |
| 0.6478 | 5.81 | 60000 | 0.6471 |
| 0.6474 | 5.91 | 61000 | 0.6470 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Edmon02/marian-finetuned-kde4-en-to-hy
|
Edmon02
| 2023-08-31T07:54:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:opus100",
"base_model:Helsinki-NLP/opus-mt-en-hy",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hy",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-31T07:51:18Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hy
tags:
- translation
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-hy
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-hy
split: train
args: en-hy
metrics:
- name: Bleu
type: bleu
value: 18.363987489312905
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-hy
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hy](https://huggingface.co/Helsinki-NLP/opus-mt-en-hy) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4183
- Bleu: 18.3640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gillankrishna/ppo-LunarLander
|
gillankrishna
| 2023-08-31T07:54:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T07:53:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.64 +/- 59.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
redstonehero/arteyou_alpha1
|
redstonehero
| 2023-08-31T07:52:33Z | 21 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T07:12:02Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/horridhentaimix_v10
|
redstonehero
| 2023-08-31T07:52:30Z | 21 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T07:13:38Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/mistoonamethyst_v20
|
redstonehero
| 2023-08-31T07:52:29Z | 19 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T07:13:29Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
DopeorNope/A3_duck
|
DopeorNope
| 2023-08-31T07:37:42Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T07:35:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
sosuneko/Reinforce-CartPole-v1
|
sosuneko
| 2023-08-31T07:34:05Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T07:33:55Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SasankVH/sample_data
|
SasankVH
| 2023-08-31T07:32:04Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:localdataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-29T07:50:13Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- localdataset
metrics:
- wer
model-index:
- name: testing
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: localdataset
type: localdataset
config: default
split: test
args: 'config: data, split: test'
metrics:
- name: Wer
type: wer
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the localdataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 62
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0 | 125.0 | 125 | 0.0000 | 0.0 |
| 0.0 | 250.0 | 250 | 0.0000 | 0.0 |
| 0.0 | 375.0 | 375 | 0.0000 | 0.0 |
| 0.0 | 500.0 | 500 | 0.0000 | 0.0 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rossevine/Model_G_2
|
rossevine
| 2023-08-31T07:29:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-29T13:06:01Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: Model_G_2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.251258623904531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_G_2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- Wer: 0.2513
- Cer: 0.0631
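The WER and CER above are edit-distance metrics over words and characters, respectively. A minimal WER sketch (not the evaluation code used for this model):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))          # edit distances for the previous row
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds the old d[j-1]
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution / match
    return d[len(hyp)] / len(ref)

# One substitution out of three reference words: WER ≈ 0.33
print(wer("the cat sat", "the bat sat"))
```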
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.7484 | 3.23 | 400 | 0.5706 | 0.5698 | 0.1477 |
| 0.3419 | 6.45 | 800 | 0.4120 | 0.3758 | 0.0924 |
| 0.1796 | 9.68 | 1200 | 0.3691 | 0.3295 | 0.0843 |
| 0.125 | 12.9 | 1600 | 0.3821 | 0.3097 | 0.0782 |
| 0.0984 | 16.13 | 2000 | 0.4085 | 0.2947 | 0.0742 |
| 0.0827 | 19.35 | 2400 | 0.3859 | 0.2781 | 0.0711 |
| 0.0666 | 22.58 | 2800 | 0.3813 | 0.2663 | 0.0684 |
| 0.0558 | 25.81 | 3200 | 0.3681 | 0.2545 | 0.0644 |
| 0.0466 | 29.03 | 3600 | 0.3710 | 0.2513 | 0.0631 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 1.18.3
- Tokenizers 0.13.3
|
ardt-multipart/ardt-multipart-ppo_train_halfcheetah_level-3108_0610-66
|
ardt-multipart
| 2023-08-31T07:22:42Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T05:11:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-multipart-ppo_train_halfcheetah_level-3108_0610-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-multipart-ppo_train_halfcheetah_level-3108_0610-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vnktrmnb/MBERT_FT-TyDiQA_S431
|
vnktrmnb
| 2023-08-31T07:20:23Z | 78 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-31T06:26:45Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/MBERT_FT-TyDiQA_S431
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/MBERT_FT-TyDiQA_S431
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6089
- Train End Logits Accuracy: 0.8391
- Train Start Logits Accuracy: 0.8668
- Validation Loss: 0.5017
- Validation End Logits Accuracy: 0.8608
- Validation Start Logits Accuracy: 0.9085
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
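With `power: 1.0`, the `PolynomialDecay` schedule above reduces to a plain linear decay from 2e-05 to 0 over 2412 steps. A minimal sketch of the decayed rate at a given step (an illustration of the formula, not the Keras implementation itself):

```python
def linear_decay_lr(step: int,
                    initial_lr: float = 2e-05,
                    end_lr: float = 0.0,
                    decay_steps: int = 2412) -> float:
    """Linearly interpolate the learning rate; clamp after decay_steps."""
    step = min(step, decay_steps)
    return end_lr + (initial_lr - end_lr) * (1 - step / decay_steps)

print(linear_decay_lr(0))     # → 2e-05 (initial rate)
print(linear_decay_lr(2412))  # → 0.0 (fully decayed)
```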
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4634 | 0.6632 | 0.6911 | 0.5058 | 0.8325 | 0.8982 | 0 |
| 0.8321 | 0.7907 | 0.8249 | 0.4951 | 0.8531 | 0.9085 | 1 |
| 0.6089 | 0.8391 | 0.8668 | 0.5017 | 0.8608 | 0.9085 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-ppo_train_halfcheetah_level-3108_0607-66
|
dt-and-vanilla-ardt
| 2023-08-31T07:16:25Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-31T05:09:28Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-ppo_train_halfcheetah_level-3108_0607-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-ppo_train_halfcheetah_level-3108_0607-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vnktrmnb/MBERT_FT-TyDiQA_S41
|
vnktrmnb
| 2023-08-31T07:09:59Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-29T09:00:08Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/MBERT_FT-TyDiQA_S41
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/MBERT_FT-TyDiQA_S41
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6256
- Train End Logits Accuracy: 0.8359
- Train Start Logits Accuracy: 0.8649
- Validation Loss: 0.4800
- Validation End Logits Accuracy: 0.8595
- Validation Start Logits Accuracy: 0.8995
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2412, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4994 | 0.6497 | 0.6777 | 0.4953 | 0.8479 | 0.8982 | 0 |
| 0.8529 | 0.7875 | 0.8176 | 0.4775 | 0.8544 | 0.8892 | 1 |
| 0.6256 | 0.8359 | 0.8649 | 0.4800 | 0.8595 | 0.8995 | 2 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nisten/bigdoc-c13b-instruct-tf32
|
nisten
| 2023-08-31T06:54:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T06:51:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
redstonehero/cyberrealistic_v33_pruned
|
redstonehero
| 2023-08-31T06:53:52Z | 23 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T06:08:48Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/revanimatedfp16_122_pruned
|
redstonehero
| 2023-08-31T06:53:35Z | 19 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T06:07:35Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
grammarly/pseudonymization-seq2seq
|
grammarly
| 2023-08-31T06:52:22Z | 0 | 5 | null |
[
"text2text-generation",
"en",
"dataset:grammarly/pseudonymization-data",
"dataset:cnn_dailymail",
"dataset:imdb",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-07-05T18:35:11Z |
---
license: apache-2.0
datasets:
- grammarly/pseudonymization-data
- cnn_dailymail
- imdb
language:
- en
metrics:
- f1
- bleu
pipeline_tag: text2text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This repository contains files for two Seq2Seq transformers-based models used in our paper: https://aclanthology.org/2023.trustnlp-1.20/.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Oleksandr Yermilov, Vipul Raheja, Artem Chernodub
- **Model type:** Seq2Seq
- **Language (NLP):** English
- **License:** Apache license 2.0
- **Finetuned from model:** BART
### Model Sources
- **Paper:** https://aclanthology.org/2023.trustnlp-1.20/
## Uses
These models can be used for anonymizing English-language datasets.
## Bias, Risks, and Limitations
Please check the Limitations section in our paper.
## Training Details
### Training Data
https://huggingface.co/datasets/grammarly/pseudonymization-data/tree/main/seq2seq
### Training Procedure
1. Gather text data from Wikipedia.
2. Preprocess it using NER-based pseudonymization.
3. Fine-tune BART model on translation task for translating text from "original" to "pseudonymized".
#### Training Hyperparameters
We train the models for 3 epochs using `AdamW` optimization with a learning rate of α = 2×10<sup>-5</sup> and a batch size of 8.
## Evaluation
### Factors & Metrics
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
There is no ground truth of named entities for the data on which this model was trained. We check whether a word is a named entity using one of the NER systems (spaCy or FLAIR).
#### Metrics
We measure the amount of text changed by our model. Specifically, we check the translated text word by word for the following categories:
1. True positive (TP) - a named entity that was changed to another named entity.
2. True negative (TN) - not a named entity, and left unchanged.
3. False positive (FP) - not a named entity, but changed to another word.
4. False negative (FN) - a named entity that was not changed to another named entity.
We calculate the F<sub>1</sub> score based on these values.
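From these counts, F<sub>1</sub> combines precision and recall in the usual way; a minimal sketch (the function name is illustrative, not from the paper's released code):

```python
def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    """Compute F1 from word-level pseudonymization counts (TN is not needed)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 90 entities correctly replaced, 10 spurious changes, 10 missed entities
print(f1_from_counts(tp=90, fp=10, fn=10))  # → 0.9
```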
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@inproceedings{yermilov-etal-2023-privacy,
title = "Privacy- and Utility-Preserving {NLP} with Anonymized data: A case study of Pseudonymization",
author = "Yermilov, Oleksandr and
Raheja, Vipul and
Chernodub, Artem",
booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.trustnlp-1.20",
doi = "10.18653/v1/2023.trustnlp-1.20",
pages = "232--241",
abstract = "This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques better to balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.",
}
```
## Model Card Contact
Oleksandr Yermilov (oleksandr.yermilov@ucu.edu.ua).
|
taufiq-lalokalabs/gpt2-test
|
taufiq-lalokalabs
| 2023-08-31T06:41:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T06:41:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
LarryAIDraw/Aria
|
LarryAIDraw
| 2023-08-31T06:40:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-30T05:43:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/136374/kurenaino-aria-occulticnine
|
LarryAIDraw/MGCM_eriza
|
LarryAIDraw
| 2023-08-31T06:27:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-31T05:05:11Z |
---
license: creativeml-openrail-m
---
|
TrevorJS/CodeLlama-13b-mtg
|
TrevorJS
| 2023-08-31T06:25:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T06:05:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
LarryAIDraw/shizuka_hiratsuka_s2_v2
|
LarryAIDraw
| 2023-08-31T06:22:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-31T05:20:17Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/25866/shizuka-hiratsuka-season-1-season-2
|
yangdechuan/codeparrot-ds
|
yangdechuan
| 2023-08-31T06:22:06Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-28T12:33:55Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2102 | 0.02 | 1000 | 2.7478 |
| 2.359 | 0.03 | 2000 | 2.2031 |
| 2.0974 | 0.05 | 3000 | 1.9751 |
| 1.9383 | 0.06 | 4000 | 1.8321 |
| 1.8346 | 0.08 | 5000 | 1.7406 |
| 1.7547 | 0.09 | 6000 | 1.6731 |
| 1.6994 | 0.11 | 7000 | 1.6212 |
| 1.6632 | 0.12 | 8000 | 1.5842 |
| 1.6237 | 0.14 | 9000 | 1.5506 |
| 1.5986 | 0.15 | 10000 | 1.5247 |
| 1.5749 | 0.17 | 11000 | 1.4994 |
| 1.5466 | 0.18 | 12000 | 1.4783 |
| 1.5254 | 0.2 | 13000 | 1.4579 |
| 1.5085 | 0.21 | 14000 | 1.4420 |
| 1.4884 | 0.23 | 15000 | 1.4235 |
| 1.4842 | 0.25 | 16000 | 1.4088 |
| 1.4618 | 0.26 | 17000 | 1.3957 |
| 1.4479 | 0.28 | 18000 | 1.3825 |
| 1.4376 | 0.29 | 19000 | 1.3716 |
| 1.4225 | 0.31 | 20000 | 1.3583 |
| 1.4151 | 0.32 | 21000 | 1.3476 |
| 1.4021 | 0.34 | 22000 | 1.3359 |
| 1.3956 | 0.35 | 23000 | 1.3245 |
| 1.3839 | 0.37 | 24000 | 1.3159 |
| 1.3741 | 0.38 | 25000 | 1.3060 |
| 1.3635 | 0.4 | 26000 | 1.2950 |
| 1.3491 | 0.41 | 27000 | 1.2844 |
| 1.3462 | 0.43 | 28000 | 1.2760 |
| 1.3317 | 0.44 | 29000 | 1.2676 |
| 1.3249 | 0.46 | 30000 | 1.2584 |
| 1.3164 | 0.48 | 31000 | 1.2486 |
| 1.3055 | 0.49 | 32000 | 1.2406 |
| 1.3006 | 0.51 | 33000 | 1.2327 |
| 1.2906 | 0.52 | 34000 | 1.2225 |
| 1.2821 | 0.54 | 35000 | 1.2135 |
| 1.2677 | 0.55 | 36000 | 1.2068 |
| 1.2562 | 0.57 | 37000 | 1.1981 |
| 1.2541 | 0.58 | 38000 | 1.1896 |
| 1.2377 | 0.6 | 39000 | 1.1814 |
| 1.2346 | 0.61 | 40000 | 1.1726 |
| 1.2251 | 0.63 | 41000 | 1.1647 |
| 1.2175 | 0.64 | 42000 | 1.1575 |
| 1.2112 | 0.66 | 43000 | 1.1486 |
| 1.2021 | 0.67 | 44000 | 1.1410 |
| 1.1888 | 0.69 | 45000 | 1.1339 |
| 1.1939 | 0.71 | 46000 | 1.1259 |
| 1.18 | 0.72 | 47000 | 1.1198 |
| 1.1698 | 0.74 | 48000 | 1.1130 |
| 1.1634 | 0.75 | 49000 | 1.1063 |
| 1.1593 | 0.77 | 50000 | 1.1006 |
| 1.1545 | 0.78 | 51000 | 1.0946 |
| 1.1478 | 0.8 | 52000 | 1.0896 |
| 1.1443 | 0.81 | 53000 | 1.0855 |
| 1.1365 | 0.83 | 54000 | 1.0808 |
| 1.1332 | 0.84 | 55000 | 1.0773 |
| 1.1336 | 0.86 | 56000 | 1.0736 |
| 1.1276 | 0.87 | 57000 | 1.0711 |
| 1.1241 | 0.89 | 58000 | 1.0686 |
| 1.123 | 0.9 | 59000 | 1.0665 |
| 1.1187 | 0.92 | 60000 | 1.0647 |
| 1.1123 | 0.93 | 61000 | 1.0636 |
| 1.1159 | 0.95 | 62000 | 1.0628 |
| 1.1133 | 0.97 | 63000 | 1.0623 |
| 1.1181 | 0.98 | 64000 | 1.0621 |
| 1.1125 | 1.0 | 65000 | 1.0621 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/plymouth
|
LarryAIDraw
| 2023-08-31T06:21:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-30T05:44:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/135762/hms-plymouth-or-azur-lane
|
DogGoesBark/medical_en_zh_8_29
|
DogGoesBark
| 2023-08-31T06:21:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-29T14:51:16Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-zh
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: medical_en_zh_8_29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medical_en_zh_8_29
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6159
- Bleu: 41.4839
- Gen Len: 77.4048
## Model description
More information needed
## Intended uses & limitations
More information needed
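Since the base model is the Marian-based `Helsinki-NLP/opus-mt-en-zh`, English-to-Chinese translation via the `translation` pipeline is the natural use. A hedged sketch, not from the card:

```python
from transformers import pipeline

def translate_en_zh(text: str, model_id: str = "DogGoesBark/medical_en_zh_8_29") -> str:
    """Hypothetical helper: translate English (medical) text to Chinese."""
    translator = pipeline("translation", model=model_id)
    return translator(text)[0]["translation_text"]
```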
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.5915 | 1.02 | 3000 | 1.4640 | 30.8193 | 76.572 |
| 1.2908 | 2.04 | 6000 | 1.2734 | 32.3053 | 76.897 |
| 1.0814 | 3.06 | 9000 | 1.1348 | 34.3605 | 77.2082 |
| 0.9083 | 4.08 | 12000 | 1.0246 | 34.9139 | 76.7213 |
| 0.7507 | 5.1 | 15000 | 0.9336 | 36.2245 | 76.6036 |
| 0.6046 | 6.12 | 18000 | 0.8291 | 37.987 | 77.326 |
| 0.4838 | 7.14 | 21000 | 0.7496 | 38.7572 | 77.2366 |
| 0.3861 | 8.16 | 24000 | 0.6730 | 40.3566 | 77.49 |
| 0.3203 | 9.19 | 27000 | 0.6159 | 41.4839 | 77.4048 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rossevine/Check_Model_2
|
rossevine
| 2023-08-31T06:18:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-31T04:50:41Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: Check_Model_2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.2728883087823979
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Check_Model_2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3499
- Wer: 0.2729
- Cer: 0.0673
## Model description
More information needed
## Intended uses & limitations
More information needed
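The model is a wav2vec2 checkpoint fine-tuned on the Indonesian (`id`) Common Voice split, so transcription through the `automatic-speech-recognition` pipeline should apply. A minimal sketch, assuming a local audio file path:

```python
from transformers import pipeline

def transcribe(audio_path: str, model_id: str = "rossevine/Check_Model_2") -> str:
    """Hypothetical helper: transcribe Indonesian speech with this checkpoint."""
    asr = pipeline("automatic-speech-recognition", model=model_id)
    return asr(audio_path)["text"]
```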
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.8708 | 3.23 | 400 | 0.7345 | 0.7259 | 0.2034 |
| 0.4247 | 6.45 | 800 | 0.4128 | 0.4268 | 0.1102 |
| 0.2047 | 9.68 | 1200 | 0.3726 | 0.3795 | 0.0930 |
| 0.1422 | 12.9 | 1600 | 0.3690 | 0.3514 | 0.0884 |
| 0.1139 | 16.13 | 2000 | 0.3811 | 0.3160 | 0.0794 |
| 0.089 | 19.35 | 2400 | 0.3650 | 0.2895 | 0.0731 |
| 0.0709 | 22.58 | 2800 | 0.3629 | 0.2944 | 0.0727 |
| 0.0594 | 25.81 | 3200 | 0.3538 | 0.2779 | 0.0692 |
| 0.0478 | 29.03 | 3600 | 0.3499 | 0.2729 | 0.0673 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 1.18.3
- Tokenizers 0.13.3
|
sontn122/content
|
sontn122
| 2023-08-31T06:03:48Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-08-31T05:59:52Z |
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: microsoft/deberta-v3-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft/deberta-v3-large
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6123 | 1.0 | 3550 | 1.6094 |
| 1.6124 | 2.0 | 7100 | 1.6094 |
| 1.6106 | 3.0 | 10650 | 1.6094 |
| 1.6107 | 4.0 | 14200 | 1.6094 |
| 1.6104 | 5.0 | 17750 | 1.6094 |
| 1.6115 | 6.0 | 21300 | 1.6094 |
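Note that the validation loss is pinned at 1.6094 for all six epochs, which equals ln 5 to four decimals — the loss of a 5-way multiple-choice head predicting uniformly. This suggests the model did not learn (a quick check, assuming five answer choices per question):

```python
import math

# Cross-entropy of a uniform prediction over k classes is ln(k);
# for k = 5 this matches the flat 1.6094 validation loss above.
uniform_loss = math.log(5)
print(round(uniform_loss, 4))  # 1.6094
```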
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Glavin001/coqar-questions-llama-2-7b-v0.1-GPTQ
|
Glavin001
| 2023-08-31T06:02:02Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"dataset:Glavin001/generate-questions-v0.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-27T01:20:27Z |
---
datasets:
- Glavin001/generate-questions-v0.1
language:
- en
library_name: transformers
---
|
beniben0/midjourney-falcon-7b
|
beniben0
| 2023-08-31T05:57:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-31T05:49:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
AdanLee/ppo-LunarLander-v2-CleanRL
|
AdanLee
| 2023-08-31T05:54:13Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T05:37:54Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -23.72 +/- 113.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'AdanLee/ppo-LunarLander-v2-CleanRL'
'batch_size': 512
'minibatch_size': 128}
```
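The derived values at the bottom of the dict follow from the base hyperparameters — a quick consistency check, assuming the usual CleanRL conventions:

```python
# Derived PPO quantities implied by the hyperparameters above.
num_envs = 4
num_steps = 128
num_minibatches = 4
total_timesteps = 500_000

batch_size = num_envs * num_steps               # 512, matches 'batch_size'
minibatch_size = batch_size // num_minibatches  # 128, matches 'minibatch_size'
num_updates = total_timesteps // batch_size     # number of PPO update cycles
print(batch_size, minibatch_size, num_updates)
```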
|
mahimairaja/distilhubert-music-classifier-finetuned-gtzan
|
mahimairaja
| 2023-08-31T05:46:22Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-31T02:56:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: CTC-based-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CTC-based-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7057
- Accuracy: 0.79
## Model description
More information needed
## Intended uses & limitations
More information needed
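As a distilhubert checkpoint fine-tuned on GTZAN, the intended use is music-genre classification; the `audio-classification` pipeline should work. A hedged sketch (the repo id is taken from this card's path, not its text):

```python
from transformers import pipeline

def classify_genre(
    audio_path: str,
    model_id: str = "mahimairaja/distilhubert-music-classifier-finetuned-gtzan",
):
    """Hypothetical helper: predict the GTZAN genre of an audio clip."""
    clf = pipeline("audio-classification", model=model_id)
    return clf(audio_path)
```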
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0608 | 1.0 | 57 | 2.0361 | 0.43 |
| 1.663 | 2.0 | 114 | 1.5387 | 0.62 |
| 1.2399 | 3.0 | 171 | 1.2074 | 0.68 |
| 1.0662 | 4.0 | 228 | 1.0805 | 0.65 |
| 0.7986 | 5.0 | 285 | 0.8880 | 0.75 |
| 0.7328 | 6.0 | 342 | 0.8037 | 0.74 |
| 0.5891 | 7.0 | 399 | 0.7918 | 0.78 |
| 0.5227 | 8.0 | 456 | 0.7232 | 0.79 |
| 0.5123 | 9.0 | 513 | 0.7138 | 0.78 |
| 0.5578 | 10.0 | 570 | 0.7057 | 0.79 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
GCYY/speecht5_finetuned_fleurs_zh
|
GCYY
| 2023-08-31T05:39:23Z | 82 | 1 |
transformers
|
[
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"audio",
"text-to-speech",
"dataset:fleurs",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-31T05:20:18Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- audio
- text-to-speech
datasets:
- fleurs
model-index:
- name: speecht5_finetuned_fleurs_zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_fleurs_zh
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4343
## Model description
More information needed
## Intended uses & limitations
More information needed
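Being a SpeechT5 TTS checkpoint fine-tuned on the Mandarin FLEURS split, the model should be usable with the standard SpeechT5 classes. A sketch under stated assumptions — SpeechT5 requires a 512-dim x-vector speaker embedding, and the zero vector below is purely a placeholder; real usage should load an embedding from a speaker-verification dataset:

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

def synthesize(text: str, model_id: str = "GCYY/speecht5_finetuned_fleurs_zh"):
    """Hypothetical helper: synthesize speech from text with this checkpoint."""
    processor = SpeechT5Processor.from_pretrained(model_id)
    model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
    inputs = processor(text=text, return_tensors="pt")
    speaker = torch.zeros((1, 512))  # placeholder speaker embedding
    return model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
```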
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6864 | 1.09 | 100 | 0.6009 |
| 0.5976 | 2.19 | 200 | 0.5062 |
| 0.543 | 3.28 | 300 | 0.4577 |
| 0.4786 | 4.38 | 400 | 0.4343 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bazaar/cv_canal_pollution_level
|
Bazaar
| 2023-08-31T05:34:23Z | 183 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-31T03:17:43Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cv_canal_pollution_level
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9027777910232544
---
# cv_canal_pollution_level
Fine-tuned with HuggingPics.
Task: canal pollution level classification (no pollution, light pollution, moderate pollution, heavy pollution).
Usage:
```python
from transformers import pipeline
classifier = pipeline('image-classification', model='Bazaar/cv_canal_pollution_level')
print(classifier('http://image-url'))
```
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### no pollution

#### light pollution

#### moderate pollution

#### heavy pollution

|
redstonehero/realcartoonpixar_v2
|
redstonehero
| 2023-08-31T05:31:46Z | 17 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T05:03:05Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/realcartoonrealistic_v6
|
redstonehero
| 2023-08-31T05:31:41Z | 20 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-31T05:03:28Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
dkqjrm/20230831092825
|
dkqjrm
| 2023-08-31T05:29:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T00:28:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230831092825'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230831092825
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5298
- Accuracy: 0.6771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.4989 | 0.5 |
| 0.5076 | 2.0 | 680 | 0.4922 | 0.5 |
| 0.5029 | 3.0 | 1020 | 0.4980 | 0.5 |
| 0.5029 | 4.0 | 1360 | 0.4881 | 0.5125 |
| 0.4992 | 5.0 | 1700 | 0.5067 | 0.5 |
| 0.4818 | 6.0 | 2040 | 0.4919 | 0.5251 |
| 0.4818 | 7.0 | 2380 | 0.5045 | 0.5392 |
| 0.4719 | 8.0 | 2720 | 0.4695 | 0.5 |
| 0.4636 | 9.0 | 3060 | 0.4805 | 0.5 |
| 0.4636 | 10.0 | 3400 | 0.5002 | 0.5 |
| 0.4501 | 11.0 | 3740 | 0.5665 | 0.6646 |
| 0.4418 | 12.0 | 4080 | 0.5283 | 0.6897 |
| 0.4418 | 13.0 | 4420 | 0.4705 | 0.5 |
| 0.4352 | 14.0 | 4760 | 0.5644 | 0.6630 |
| 0.4302 | 15.0 | 5100 | 0.5080 | 0.6505 |
| 0.4302 | 16.0 | 5440 | 0.5084 | 0.6897 |
| 0.4305 | 17.0 | 5780 | 0.5006 | 0.6599 |
| 0.4203 | 18.0 | 6120 | 0.5246 | 0.6928 |
| 0.4203 | 19.0 | 6460 | 0.4958 | 0.6583 |
| 0.4166 | 20.0 | 6800 | 0.5595 | 0.6630 |
| 0.4117 | 21.0 | 7140 | 0.4796 | 0.5 |
| 0.4117 | 22.0 | 7480 | 0.4820 | 0.5 |
| 0.4131 | 23.0 | 7820 | 0.5158 | 0.6755 |
| 0.406 | 24.0 | 8160 | 0.4801 | 0.5 |
| 0.4062 | 25.0 | 8500 | 0.5471 | 0.6646 |
| 0.4062 | 26.0 | 8840 | 0.4904 | 0.5 |
| 0.4021 | 27.0 | 9180 | 0.4880 | 0.5 |
| 0.3971 | 28.0 | 9520 | 0.5019 | 0.6646 |
| 0.3971 | 29.0 | 9860 | 0.4825 | 0.5 |
| 0.3936 | 30.0 | 10200 | 0.5069 | 0.6693 |
| 0.3907 | 31.0 | 10540 | 0.5472 | 0.6693 |
| 0.3907 | 32.0 | 10880 | 0.4886 | 0.5 |
| 0.3906 | 33.0 | 11220 | 0.5531 | 0.6693 |
| 0.3888 | 34.0 | 11560 | 0.5023 | 0.5266 |
| 0.3888 | 35.0 | 11900 | 0.4896 | 0.5 |
| 0.387 | 36.0 | 12240 | 0.4985 | 0.5 |
| 0.3836 | 37.0 | 12580 | 0.5309 | 0.6834 |
| 0.3836 | 38.0 | 12920 | 0.5402 | 0.6818 |
| 0.3792 | 39.0 | 13260 | 0.4854 | 0.5 |
| 0.3789 | 40.0 | 13600 | 0.4971 | 0.5 |
| 0.3789 | 41.0 | 13940 | 0.5368 | 0.6803 |
| 0.3775 | 42.0 | 14280 | 0.4958 | 0.5047 |
| 0.3753 | 43.0 | 14620 | 0.5139 | 0.6897 |
| 0.3753 | 44.0 | 14960 | 0.5224 | 0.6834 |
| 0.3795 | 45.0 | 15300 | 0.5119 | 0.6865 |
| 0.3743 | 46.0 | 15640 | 0.5120 | 0.6740 |
| 0.3743 | 47.0 | 15980 | 0.5049 | 0.5204 |
| 0.3726 | 48.0 | 16320 | 0.5026 | 0.5 |
| 0.3683 | 49.0 | 16660 | 0.5137 | 0.6646 |
| 0.3707 | 50.0 | 17000 | 0.5088 | 0.6129 |
| 0.3707 | 51.0 | 17340 | 0.5608 | 0.6646 |
| 0.3654 | 52.0 | 17680 | 0.5217 | 0.6803 |
| 0.3684 | 53.0 | 18020 | 0.5236 | 0.6740 |
| 0.3684 | 54.0 | 18360 | 0.5135 | 0.5016 |
| 0.3663 | 55.0 | 18700 | 0.5192 | 0.6818 |
| 0.3669 | 56.0 | 19040 | 0.5212 | 0.6160 |
| 0.3669 | 57.0 | 19380 | 0.5320 | 0.6740 |
| 0.3641 | 58.0 | 19720 | 0.5344 | 0.6646 |
| 0.3628 | 59.0 | 20060 | 0.4991 | 0.5 |
| 0.3628 | 60.0 | 20400 | 0.5341 | 0.6661 |
| 0.3612 | 61.0 | 20740 | 0.5039 | 0.5 |
| 0.3608 | 62.0 | 21080 | 0.5267 | 0.6379 |
| 0.3608 | 63.0 | 21420 | 0.5249 | 0.6364 |
| 0.3599 | 64.0 | 21760 | 0.5226 | 0.6599 |
| 0.3616 | 65.0 | 22100 | 0.5370 | 0.6834 |
| 0.3616 | 66.0 | 22440 | 0.5109 | 0.5 |
| 0.3543 | 67.0 | 22780 | 0.5368 | 0.6740 |
| 0.3616 | 68.0 | 23120 | 0.5236 | 0.5690 |
| 0.3616 | 69.0 | 23460 | 0.5300 | 0.6693 |
| 0.3578 | 70.0 | 23800 | 0.5441 | 0.6583 |
| 0.3541 | 71.0 | 24140 | 0.5310 | 0.6724 |
| 0.3541 | 72.0 | 24480 | 0.5346 | 0.6693 |
| 0.354 | 73.0 | 24820 | 0.5338 | 0.6630 |
| 0.355 | 74.0 | 25160 | 0.5279 | 0.6599 |
| 0.3536 | 75.0 | 25500 | 0.5280 | 0.6552 |
| 0.3536 | 76.0 | 25840 | 0.5328 | 0.6693 |
| 0.3539 | 77.0 | 26180 | 0.5231 | 0.5376 |
| 0.3527 | 78.0 | 26520 | 0.5282 | 0.6646 |
| 0.3527 | 79.0 | 26860 | 0.5250 | 0.6364 |
| 0.3535 | 80.0 | 27200 | 0.5298 | 0.6771 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aaniket/aniket
|
aaniket
| 2023-08-31T05:28:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:timit_asr",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-30T17:29:32Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: aniket
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aniket
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the timit_asr dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4216
- eval_wer: 1.0
- eval_runtime: 48.4566
- eval_samples_per_second: 33.576
- eval_steps_per_second: 4.21
- epoch: 19.86
- step: 2800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dhmeltzer/llama-7b-SFT-qlora-wiki_DPO_ds_RM_top_2_1024_r_64_alpha_16
|
dhmeltzer
| 2023-08-31T05:21:39Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged",
"base_model:finetune:dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged",
"region:us"
] | null | 2023-08-31T03:53:40Z |
---
base_model: dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged
tags:
- generated_from_trainer
model-index:
- name: llama-7b-SFT-qlora-wiki_DPO_ds_RM_top_2_1024_r_64_alpha_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-SFT-qlora-wiki_DPO_ds_RM_top_2_1024_r_64_alpha_16
This model is a fine-tuned version of [dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged](https://huggingface.co/dhmeltzer/llama-7b-SFT_ds_wiki65k_1024_r_64_alpha_16_merged) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6572
- Rewards/chosen: -0.1473
- Rewards/rejected: -0.2755
- Rewards/accuracies: 0.6128
- Rewards/margins: 0.1282
- Logps/rejected: -203.3539
- Logps/chosen: -207.2538
- Logits/rejected: 1.1534
- Logits/chosen: 1.1690
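The DPO reward metrics above are internally consistent: the reported margin is the chosen reward minus the rejected reward (a quick check):

```python
# Values copied from the evaluation results in this card.
chosen, rejected, margin = -0.1473, -0.2755, 0.1282

# rewards/margins should equal rewards/chosen - rewards/rejected.
assert abs((chosen - rejected) - margin) < 1e-6
print(round(chosen - rejected, 4))
```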
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.1 | 19 | 0.6761 | -0.1021 | -0.1593 | 0.5697 | 0.0573 | -202.1919 | -206.8013 | 1.1506 | 1.1664 |
| 0.6754 | 0.21 | 38 | 0.6738 | -0.4156 | -0.5460 | 0.5701 | 0.1303 | -206.0580 | -209.9368 | 1.1257 | 1.1406 |
| 0.6799 | 0.31 | 57 | 0.6666 | -0.0458 | -0.1454 | 0.5932 | 0.0996 | -202.0523 | -206.2388 | 1.1176 | 1.1327 |
| 0.6618 | 0.42 | 76 | 0.6637 | -0.1458 | -0.2745 | 0.5971 | 0.1286 | -203.3434 | -207.2391 | 1.1195 | 1.1333 |
| 0.6706 | 0.52 | 95 | 0.6607 | -0.0386 | -0.1827 | 0.5971 | 0.1440 | -202.4252 | -206.1670 | 1.1334 | 1.1484 |
| 0.668 | 0.63 | 114 | 0.6596 | -0.1615 | -0.2945 | 0.6035 | 0.1330 | -203.5434 | -207.3955 | 1.1500 | 1.1661 |
| 0.6712 | 0.73 | 133 | 0.6597 | -0.1703 | -0.2905 | 0.5979 | 0.1202 | -203.5037 | -207.4840 | 1.1515 | 1.1672 |
| 0.6715 | 0.84 | 152 | 0.6588 | -0.1516 | -0.2745 | 0.6100 | 0.1229 | -203.3436 | -207.2964 | 1.1532 | 1.1691 |
| 0.673 | 0.94 | 171 | 0.6572 | -0.1473 | -0.2755 | 0.6128 | 0.1282 | -203.3539 | -207.2538 | 1.1534 | 1.1690 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Saksham1234/helloitsme
|
Saksham1234
| 2023-08-31T05:05:59Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-31T05:05:59Z |
---
license: bigscience-openrail-m
---
|
54data/llama_2_ko_7b_wiki_QA
|
54data
| 2023-08-31T05:00:25Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:beomi/llama-2-ko-7b",
"base_model:finetune:beomi/llama-2-ko-7b",
"region:us"
] | null | 2023-08-23T15:46:46Z |
---
base_model: beomi/llama-2-ko-7b
tags:
- generated_from_trainer
model-index:
- name: llama_2_ko_7b_wiki_QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_2_ko_7b_wiki_QA
This model is a fine-tuned version of [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8568 | 0.33 | 50 | 1.4294 |
| 1.2307 | 0.67 | 100 | 1.2169 |
| 1.1788 | 1.0 | 150 | 1.1865 |
| 1.0837 | 1.33 | 200 | 1.1810 |
| 1.1905 | 1.67 | 250 | 1.1740 |
| 1.161 | 2.0 | 300 | 1.1703 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Yaxin1992/codellama-13b-multi-1800
|
Yaxin1992
| 2023-08-31T04:45:38Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-13b-hf",
"base_model:finetune:codellama/CodeLlama-13b-hf",
"license:llama2",
"region:us"
] | null | 2023-08-31T01:41:52Z |
---
license: llama2
base_model: codellama/CodeLlama-13b-hf
tags:
- generated_from_trainer
model-index:
- name: codellama-13b-multi-1800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama-13b-multi-1800
This model is a fine-tuned version of [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1800
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
soohmatthew/reddit-confidence-setfit-model-1
|
soohmatthew
| 2023-08-31T04:42:34Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-28T12:20:56Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# soohmatthew/reddit-confidence-setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("soohmatthew/reddit-confidence-setfit-model-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
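Step 1 above fine-tunes the sentence transformer on sentence pairs built from the few labeled examples. A minimal, library-free sketch of that pair-generation idea (same label → positive pair, different labels → negative pair; SetFit's actual sampling logic differs in detail):

```python
from itertools import combinations

def make_pairs(texts, labels):
    """Build (text_a, text_b, 1/0) contrastive pairs from labeled examples."""
    pairs = []
    for (ta, la), (tb, lb) in combinations(zip(texts, labels), 2):
        pairs.append((ta, tb, 1 if la == lb else 0))
    return pairs

# Three labeled texts yield three pairs: one positive, two negatives.
pairs = make_pairs(["great movie", "loved it", "awful"], [1, 1, 0])
```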
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
whywynn/poca-SoccerTwos
|
whywynn
| 2023-08-31T04:29:39Z | 37 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-08-31T04:19:21Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: whywynn/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
soohmatthew/reddit-care-setfit-model-1
|
soohmatthew
| 2023-08-31T04:27:13Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-28T11:56:12Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# soohmatthew/reddit-care-setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("soohmatthew/reddit-care-setfit-model-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
spsither/whisper-small-hi
|
spsither
| 2023-08-31T04:16:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-16T06:40:04Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3488
- Wer: 18.8573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1899 | 3.02 | 1000 | 0.3311 | 22.3674 |
| 0.0179 | 6.04 | 2000 | 0.3252 | 19.8309 |
| 0.0026 | 9.06 | 3000 | 0.3382 | 18.8189 |
| 0.0009 | 12.08 | 4000 | 0.3488 | 18.8573 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
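The Wer column above is a word error rate — substitutions, insertions, and deletions divided by the reference length. A minimal edit-distance sketch of the metric, for illustration only (evaluation would normally use a library such as `jiwer` or `evaluate`):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substitution in a three-word reference -> WER of 33.3 (as a percentage)
print(round(100 * wer("the cat sat", "the dog sat"), 1))  # 33.3
```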
|
rossevine/Check_Model_1
|
rossevine
| 2023-08-31T03:53:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large",
"base_model:finetune:facebook/wav2vec2-large",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-30T20:34:44Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: Check_Model_1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.37479022934924483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Check_Model_1
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5522
- Wer: 0.3748
- Cer: 0.1158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 2.1839 | 3.23 | 400 | 0.8796 | 0.7306 | 0.2332 |
| 0.6388 | 6.45 | 800 | 0.8702 | 0.6410 | 0.2200 |
| 0.4695 | 9.68 | 1200 | 0.7064 | 0.5360 | 0.1632 |
| 0.3659 | 12.9 | 1600 | 0.5814 | 0.5211 | 0.1662 |
| 0.285 | 16.13 | 2000 | 0.6394 | 0.5041 | 0.1663 |
| 0.2254 | 19.35 | 2400 | 0.5889 | 0.4428 | 0.1405 |
| 0.1801 | 22.58 | 2800 | 0.5712 | 0.4013 | 0.1182 |
| 0.1392 | 25.81 | 3200 | 0.5914 | 0.3934 | 0.1177 |
| 0.1051 | 29.03 | 3600 | 0.5522 | 0.3748 | 0.1158 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 1.18.3
- Tokenizers 0.13.3
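The linear scheduler with 500 warmup steps listed above ramps the learning rate up from 0 to 3e-4, then decays it linearly back to 0. A small sketch of that shape, mirroring the usual `get_linear_schedule_with_warmup` behavior (the total step count of 3600 is taken from the results table above):

```python
def linear_warmup_lr(step, base_lr=3e-4, warmup=500, total=3600):
    """Linear warmup to base_lr over `warmup` steps, then linear decay to 0 at `total`."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total - step) / (total - warmup))

# lr is 0 at step 0, peaks at the end of warmup, and reaches 0 at the last step.
```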
|
MohanaSri/Taxi
|
MohanaSri
| 2023-08-31T03:48:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T03:48:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="MohanaSri/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
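For reference, the tabular Q-learning update this agent was trained with can be sketched in a few lines (`alpha` and `gamma` here are illustrative values, not the ones used for this checkpoint):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One TD update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Toy 2-state, 2-action table: a reward of 1 pulls Q[0][1] up from zero.
Q = [[0.0, 0.0], [0.0, 0.0]]
Q = q_update(Q, state=0, action=1, reward=1.0, next_state=1)
```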
|
Serotina/Reinforce-PixelCopter
|
Serotina
| 2023-08-31T03:43:59Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T02:03:50Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.00 +/- 29.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
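The Reinforce algorithm behind this agent weights each action's log-probability by the discounted return from that timestep. A minimal sketch of the return computation (`gamma` is an illustrative value):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

# Earlier steps accumulate (discounted) credit for all later rewards.
rs = discounted_returns([1.0, 1.0, 1.0])
```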
|
beatwade/alpaca-bitcoin-tweets-sentiment
|
beatwade
| 2023-08-31T03:23:57Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-30T23:29:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
bwarshaw/dqn-SpaceInvadersNoFrameskip-v4
|
bwarshaw
| 2023-08-31T03:12:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-31T03:11:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 577.50 +/- 126.40
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bwarshaw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga bwarshaw -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga bwarshaw
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
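The `exploration_fraction`/`exploration_final_eps` pair above defines SB3's linear epsilon-greedy schedule: epsilon decays from 1.0 to 0.01 over the first 10% of the 1,000,000 timesteps and then stays flat. A small sketch of that schedule:

```python
def epsilon(step, total=1_000_000, fraction=0.1, initial=1.0, final=0.01):
    """Linear decay from `initial` to `final` over fraction*total steps, then flat."""
    progress = min(step / (fraction * total), 1.0)
    return initial + progress * (final - initial)

# epsilon(0) -> 1.0; epsilon(50_000) -> ~0.505; epsilon(200_000) -> ~0.01 (flat)
```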
|
pensuke/distilbert-base-uncased-finetuned-emotion
|
pensuke
| 2023-08-31T03:10:05Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T02:32:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
- name: F1
type: f1
value: 0.9324937609411934
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1879
- Accuracy: 0.9325
- F1: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4344 | 1.0 | 250 | 0.2242 | 0.9185 | 0.9176 |
| 0.1857 | 2.0 | 500 | 0.1879 | 0.9325 | 0.9325 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
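The Accuracy and F1 figures above are standard classification metrics; the F1 reported by this template is averaged over the emotion classes. As a reminder of the definitions, a minimal single-class sketch:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy(correct, total):
    """Fraction of examples predicted correctly."""
    return correct / total

# e.g. 90 true positives, 10 false positives, 10 false negatives -> F1 = 0.9
```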
|
batman555/layer_1_classifier
|
batman555
| 2023-08-31T03:09:03Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-31T02:46:49Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: layer_1_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layer_1_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1867
- Accuracy: 0.9457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.1221 | 1.0 |
| No log | 2.0 | 8 | 0.0832 | 1.0 |
| No log | 3.0 | 12 | 0.0647 | 1.0 |
| No log | 4.0 | 16 | 0.0591 | 1.0 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TigerResearch/tigerbot-7b-chat-8bit
|
TigerResearch
| 2023-08-31T02:55:52Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-30T10:16:15Z |
---
license: apache-2.0
---
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is an 8-bit GPTQ version of [TigerBot 7b chat](https://huggingface.co/TigerResearch/tigerbot-7b-chat).
It was quantized to 8-bit using: https://github.com/PanQiWei/AutoGPTQ
## How to download and use this model on GitHub: https://github.com/TigerResearch/TigerBot
Here are commands to clone the TigerBot and install.
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with command line interface
```
# Install auto-gptq
pip install auto-gptq
# Launch inference
CUDA_VISIBLE_DEVICES=0 python other_infer/gptq_infer.py --model_path TigerResearch/tigerbot-7b-chat-8bit
```
|