| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-30 12:27:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 528 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-30 12:27:19) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Marco-Cheung/whisper-small-cantonese
|
Marco-Cheung
| 2023-08-09T16:07:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-08T06:53:17Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Cantonese - Marco Cheung
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: zh-HK
split: test
args: zh-HK
metrics:
- name: Wer
type: wer
value: 57.700752823086574
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Cantonese - Marco Cheung
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2487
- Wer Ortho: 57.8423
- Wer: 57.7008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 10
- training_steps: 2000
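Expressed in code, these settings would correspond roughly to the following `Seq2SeqTrainingArguments` (a sketch only; the `generated_from_trainer` tag suggests the standard `Seq2SeqTrainer` workflow, and the output directory below is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-cantonese",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=10,
    max_steps=2000,
)
```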
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1621 | 1.14 | 1000 | 0.2587 | 61.0824 | 65.0094 |
| 0.0767 | 2.28 | 2000 | 0.2487 | 57.8423 | 57.7008 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
fernandals/sentiment_v1
|
fernandals
| 2023-08-09T16:06:59Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T14:10:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4863
- Accuracy: 0.8312
## Model description
More information needed
## Intended uses & limitations
More information needed
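The card does not document usage. As a rough sketch, inference with the standard `transformers` text-classification pipeline would look like this (the label names depend on the model's config, which is not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fernandals/sentiment_v1")
print(classifier("I really enjoyed this film!"))  # e.g. [{'label': 'LABEL_1', 'score': ...}]
```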
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5858 | 1.0 | 3410 | 0.5747 | 0.7928 |
| 0.4237 | 2.0 | 6820 | 0.4863 | 0.8312 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
tomaarsen/span-marker-bert-base-ncbi-disease
|
tomaarsen
| 2023-08-09T16:04:52Z | 18 | 6 |
span-marker
|
[
"span-marker",
"pytorch",
"tensorboard",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"en",
"dataset:ncbi_disease",
"license:apache-2.0",
"model-index",
"region:us"
] |
token-classification
| 2023-08-09T13:55:13Z |
---
license: apache-2.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
pipeline_tag: token-classification
widget:
- text: "X-Linked adrenoleukodystrophy (ALD) is a genetic disease associated with demyelination of the central nervous system, adrenal insufficiency, and accumulation of very long chain fatty acids in tissue and body fluids."
example_title: "Example 1"
- text: "Canavan disease is inherited as an autosomal recessive trait that is caused by the deficiency of aspartoacylase (ASPA)."
example_title: "Example 2"
- text: "However, both models lack other frequent DM symptoms including the fibre-type dependent atrophy, myotonia, cataract and male-infertility."
example_title: "Example 3"
model-index:
- name: SpanMarker w. bert-base-cased on NCBI Disease by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
type: ncbi_disease
name: NCBI Disease
split: test
revision: acd0e6451198d5b615c12356ab6a05fff4610920
metrics:
- type: f1
value: 0.8813
name: F1
- type: precision
value: 0.8661
name: Precision
- type: recall
value: 0.8971
name: Recall
datasets:
- ncbi_disease
language:
- en
metrics:
- f1
- recall
- precision
---
# SpanMarker for Disease Named Entity Recognition
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [ncbi_disease](https://huggingface.co/datasets/ncbi_disease) dataset. In particular, this SpanMarker model uses [bert-base-cased](https://huggingface.co/bert-base-cased) as the underlying encoder. See [train.py](train.py) for the training script.
## Metrics
This model achieves the following results on the testing set:
- Overall Precision: 0.8661
- Overall Recall: 0.8971
- Overall F1: 0.8813
- Overall Accuracy: 0.9837
## Labels
| **Label** | **Examples** |
|-----------|--------------|
| DISEASE | "ataxia-telangiectasia", "T-cell leukaemia", "C5D", "neutrophilic leukocytosis", "pyogenic infection" |
## Usage
To use this model for inference, first install the `span_marker` library:
```bash
pip install span_marker
```
You can then run inference with this model like so:
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-ncbi-disease")
# Run inference
entities = model.predict("Canavan disease is inherited as an autosomal recessive trait that is caused by the deficiency of aspartoacylase (ASPA).")
```
See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0038 | 1.41 | 300 | 0.0059 | 0.8141 | 0.8579 | 0.8354 | 0.9818 |
| 0.0018 | 2.82 | 600 | 0.0054 | 0.8315 | 0.8720 | 0.8513 | 0.9840 |
### Framework versions
- SpanMarker 1.2.4
- Transformers 4.31.0
- Pytorch 1.13.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.2
|
tamiti1610001/bert-finetuned-ner
|
tamiti1610001
| 2023-08-09T16:02:50Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-09T14:13:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9457247828991316
- name: Recall
type: recall
value: 0.9530461124200605
- name: F1
type: f1
value: 0.949371332774518
- name: Accuracy
type: accuracy
value: 0.9913554768116506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.9457
- Recall: 0.9530
- F1: 0.9494
- Accuracy: 0.9914
## Model description
More information needed
## Intended uses & limitations
More information needed
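No usage example is provided. A minimal sketch using the standard token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tamiti1610001/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```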
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0136 | 1.0 | 878 | nan | 0.9401 | 0.9488 | 0.9445 | 0.9906 |
| 0.0063 | 2.0 | 1756 | nan | 0.9413 | 0.9507 | 0.9460 | 0.9907 |
| 0.0034 | 3.0 | 2634 | nan | 0.9457 | 0.9530 | 0.9494 | 0.9914 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:58:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:58:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
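The card gives no usage details. A minimal sketch for loading this adapter with the `peft` library, assuming (from the repository name) that the base model is `gpt2`:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l5_v20"
config = PeftConfig.from_pretrained(repo)      # reads adapter_config.json
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)  # attaches the p-tuning prompt encoder
```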
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l5_v50
|
KingKazma
| 2023-08-09T15:56:16Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:56:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
cyriac1/my-pet-dog
|
cyriac1
| 2023-08-09T15:54:39Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-09T15:51:24Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by cyriac1 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET294
Sample pictures of this concept are included in the repository files.
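The card includes no loading instructions. A minimal sketch, assuming the repository holds a diffusers-format Stable Diffusion checkpoint and that `my-pet-dog` (taken from the repository name) is the instance token:
```python
import torch
from diffusers import StableDiffusionPipeline

# "my-pet-dog" below is assumed from the repository name; the actual instance prompt is not documented.
pipe = StableDiffusionPipeline.from_pretrained("cyriac1/my-pet-dog", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my-pet-dog playing in a park").images[0]
image.save("my-pet-dog.png")
```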
|
RogerB/marian-finetuned-Umuganda-Dataset-en-to-kin
|
RogerB
| 2023-08-09T15:53:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-rw",
"base_model:finetune:Helsinki-NLP/opus-mt-en-rw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-08T18:52:54Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-rw
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-kin-Umuganda-Dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-Umuganda-Dataset-en-to-kin
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-rw](https://huggingface.co/Helsinki-NLP/opus-mt-en-rw) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8769
- Bleu: 32.8345
## Model Description
The model has been fine-tuned to perform machine translation from English to Kinyarwanda.
## Intended Uses & Limitations
The primary intended use of this model is for research purposes.
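A minimal inference sketch with the standard `transformers` translation pipeline (not part of the original card):
```python
from transformers import pipeline

# English -> Kinyarwanda translation
translator = pipeline("translation", model="RogerB/marian-finetuned-Umuganda-Dataset-en-to-kin")
print(translator("Good morning, how are you?"))
```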
## Training and Evaluation Data
The model has been fine-tuned using the [Digital Umuganda](https://huggingface.co/datasets/DigitalUmuganda/kinyarwanda-english-machine-translation-dataset/tree/main) dataset.
The dataset was split with 90% used for training and 10% for testing.
The data used to train the model were cased, and digits were removed.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l5_v50
|
KingKazma
| 2023-08-09T15:48:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:48:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
adon81/bert-finetuned-fishing-NER
|
adon81
| 2023-08-09T15:48:12Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:adon81/bert-finetuned-ner",
"base_model:finetune:adon81/bert-finetuned-ner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-09T13:13:46Z |
---
license: apache-2.0
base_model: adon81/bert-finetuned-ner
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-fishing-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-fishing-NER
This model is a fine-tuned version of [adon81/bert-finetuned-ner](https://huggingface.co/adon81/bert-finetuned-ner) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 300000000000000000000000000000000
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Shafaet02/bert-fine-tuned-cola
|
Shafaet02
| 2023-08-09T15:48:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T08:59:17Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: Shafaet02/bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Shafaet02/bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2831
- Validation Loss: 0.4311
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
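The card does not document usage. Since the repository ships TensorFlow weights, a minimal sketch with `TFAutoModelForSequenceClassification` (label names are not documented, so only the class index is shown):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Shafaet02/bert-fine-tuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was written by the author.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index
```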
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4914 | 0.4282 | 0 |
| 0.2831 | 0.4311 | 1 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.11.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Francesco-A/bert-finetuned-ner
|
Francesco-A
| 2023-08-09T15:45:53Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-09T15:29:35Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9323631552836117
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.940528818083243
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9324
- Recall: 0.9488
- F1: 0.9405
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0774 | 1.0 | 1756 | 0.0764 | 0.9146 | 0.9337 | 0.9241 | 0.9802 |
| 0.0394 | 2.0 | 3512 | 0.0554 | 0.9265 | 0.9483 | 0.9373 | 0.9860 |
| 0.0261 | 3.0 | 5268 | 0.0592 | 0.9324 | 0.9488 | 0.9405 | 0.9861 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:44:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:44:08Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
nomad-ai/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
nomad-ai
| 2023-08-09T15:40:35Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-09T14:52:27Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
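No usage example is given. A minimal sketch with the standard audio-classification pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="nomad-ai/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
print(classifier("some_song.wav", top_k=3))  # placeholder path to a local audio file
```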
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6746 | 1.0 | 112 | 0.6682 | 0.79 |
| 0.4141 | 2.0 | 225 | 0.5245 | 0.85 |
| 0.2933 | 3.0 | 337 | 0.3968 | 0.87 |
| 0.0352 | 4.0 | 450 | 0.3729 | 0.9 |
| 0.0029 | 5.0 | 562 | 0.6066 | 0.88 |
| 0.0036 | 6.0 | 675 | 0.5297 | 0.89 |
| 0.0001 | 7.0 | 787 | 0.5816 | 0.89 |
| 0.0072 | 8.0 | 900 | 0.5307 | 0.9 |
| 0.0052 | 9.0 | 1012 | 0.5536 | 0.9 |
| 0.0001 | 10.0 | 1125 | 0.5478 | 0.9 |
| 0.0001 | 11.0 | 1237 | 0.5201 | 0.9 |
| 0.0001 | 12.0 | 1350 | 0.5263 | 0.9 |
| 0.0001 | 13.0 | 1462 | 0.5223 | 0.9 |
| 0.0 | 14.0 | 1575 | 0.5225 | 0.9 |
| 0.0001 | 14.93 | 1680 | 0.5240 | 0.9 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:37:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:37:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l5_v50
|
KingKazma
| 2023-08-09T15:33:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:33:39Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Ripo-2007/dreambooth_alfonso
|
Ripo-2007
| 2023-08-09T15:32:17Z | 4 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-09T13:35:48Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: alfonsoaraco
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
dkqjrm/20230809151609
|
dkqjrm
| 2023-08-09T15:30:37Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-09T06:16:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: '20230809151609'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230809151609
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:30:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:30:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
santiagotoso/ppo-LunarLander-v2
|
santiagotoso
| 2023-08-09T15:27:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T13:24:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 232.20 +/- 76.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
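As a hedged sketch of what that code could look like, assuming the checkpoint was saved under the conventional filename `ppo-LunarLander-v2.zip` (verify against the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption based on common naming conventions.
checkpoint = load_from_hub(repo_id="santiagotoso/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```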
|
murodbek/uzroberta-panx-uz
|
murodbek
| 2023-08-09T15:27:23Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-13T09:47:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: uzroberta-panx-uz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uzroberta-panx-uz
This model is a fine-tuned version of [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.9175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0515 | 1.0 | 150 | 0.1373 | 0.9141 |
| 0.0415 | 2.0 | 300 | 0.1268 | 0.9194 |
| 0.0101 | 3.0 | 450 | 0.1225 | 0.9416 |
| 0.0038 | 4.0 | 600 | 0.1426 | 0.9353 |
| 0.0004 | 5.0 | 750 | 0.1458 | 0.9320 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
Meohong/Dialect-Polyglot-12.8b-QLoRA
|
Meohong
| 2023-08-09T15:26:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:26:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
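For reference, the same configuration expressed as a `transformers.BitsAndBytesConfig` (a sketch mirroring the values listed above):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    llm_int8_enable_fp32_cpu_offload=False,
)
```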
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l5_v50
|
KingKazma
| 2023-08-09T15:26:08Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:26:07Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
felixshier/asc-01-bert-finetuned
|
felixshier
| 2023-08-09T15:24:58Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T13:36:18Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: asc-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# asc-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6295
- Validation Loss: 0.7210
- Train Precision: 0.38
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Epoch |
|:----------:|:---------------:|:---------------:|:-----:|
| 0.7161 | 0.7021 | 0.4118 | 0 |
| 0.6906 | 0.7071 | 0.4730 | 1 |
| 0.6443 | 0.7257 | 0.3333 | 2 |
| 0.6295 | 0.7210 | 0.38 | 3 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
felixshier/csc-01-bert-finetuned
|
felixshier
| 2023-08-09T15:24:52Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T13:35:35Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: csc-01-bert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# csc-01-bert-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4789
- Validation Loss: 0.7231
- Train Precision: 0.6429
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 70, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Epoch |
|:----------:|:---------------:|:---------------:|:-----:|
| 0.7100 | 0.7421 | 0.0 | 0 |
| 0.6764 | 0.6861 | 0.625 | 1 |
| 0.6311 | 0.6838 | 0.5862 | 2 |
| 0.5909 | 0.7072 | 0.6286 | 3 |
| 0.5413 | 0.7504 | 0.6667 | 4 |
| 0.4789 | 0.7231 | 0.6429 | 5 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l5_v50
|
KingKazma
| 2023-08-09T15:18:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:18:35Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl-cdip_r2_32
|
jordyvl
| 2023-08-09T15:18:05Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-08T08:10:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip_r2_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip_r2_32
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6372
- Accuracy: 0.8985
- Brier Loss: 0.1792
- Nll: 1.1736
- F1 Micro: 0.8985
- F1 Macro: 0.8987
- Ece: 0.0847
- Aurc: 0.0201
## Model description
More information needed
## Intended uses & limitations
More information needed
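The card does not document usage. A minimal sketch with the standard image-classification pipeline (the file name is a placeholder; the label set comes from the fine-tuning data, which this card does not enumerate):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/vit-base_rvl-cdip_r2_32")
print(classifier("scanned_document.png"))  # placeholder path to a document image
```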
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1647 | 1.0 | 3334 | 0.4024 | 0.8887 | 0.1682 | 1.2086 | 0.8887 | 0.8891 | 0.0457 | 0.0178 |
| 0.1418 | 2.0 | 6668 | 0.4075 | 0.8941 | 0.1646 | 1.2066 | 0.8941 | 0.8942 | 0.0522 | 0.0177 |
| 0.0989 | 3.0 | 10002 | 0.4409 | 0.8932 | 0.1690 | 1.1966 | 0.8932 | 0.8932 | 0.0647 | 0.0175 |
| 0.0614 | 4.0 | 13336 | 0.4781 | 0.8944 | 0.1730 | 1.2083 | 0.8944 | 0.8951 | 0.0694 | 0.0181 |
| 0.0392 | 5.0 | 16670 | 0.5329 | 0.8959 | 0.1761 | 1.1777 | 0.8959 | 0.8958 | 0.0776 | 0.0187 |
| 0.0231 | 6.0 | 20004 | 0.5714 | 0.8957 | 0.1799 | 1.2083 | 0.8957 | 0.8958 | 0.0813 | 0.0198 |
| 0.0126 | 7.0 | 23338 | 0.6002 | 0.8966 | 0.1802 | 1.1732 | 0.8966 | 0.8972 | 0.0839 | 0.0197 |
| 0.0079 | 8.0 | 26672 | 0.6193 | 0.8984 | 0.1789 | 1.1849 | 0.8984 | 0.8985 | 0.0833 | 0.0200 |
| 0.0049 | 9.0 | 30006 | 0.6333 | 0.8976 | 0.1798 | 1.1906 | 0.8976 | 0.8978 | 0.0851 | 0.0205 |
| 0.0034 | 10.0 | 33340 | 0.6372 | 0.8985 | 0.1792 | 1.1736 | 0.8985 | 0.8987 | 0.0847 | 0.0201 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:16:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:15:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
imvladikon/alephbertgimmel_parashoot
|
imvladikon
| 2023-08-09T15:10:27Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"he",
"dataset:imvladikon/parashoot",
"base_model:imvladikon/alephbertgimmel-base-512",
"base_model:finetune:imvladikon/alephbertgimmel-base-512",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-02T07:44:16Z |
---
base_model: imvladikon/alephbertgimmel-base-512
tags:
- generated_from_trainer
datasets:
- imvladikon/parashoot
model-index:
- name: alephbertgimmel_parashoot
results: []
language:
- he
metrics:
- f1
- exact_match
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alephbertgimmel_parashoot
This model is a fine-tuned version of [imvladikon/alephbertgimmel-base-512](https://huggingface.co/imvladikon/alephbertgimmel-base-512) on the [imvladikon/parashoot](https://huggingface.co/datasets/imvladikon/parashoot) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
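No usage example is provided. A minimal sketch with the standard question-answering pipeline (the question and context placeholders would be Hebrew text in practice):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="imvladikon/alephbertgimmel_parashoot")
result = qa(question="<Hebrew question>", context="<Hebrew passage containing the answer>")
print(result["answer"], result["score"])
```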
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
```
***** predict metrics *****
predict_samples = 1102
test_exact_match = 27.7073
test_f1 = 51.787
test_runtime = 0:00:32.05
test_samples_per_second = 34.383
test_steps_per_second = 4.306
```
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
jcy204/heat_model2
|
jcy204
| 2023-08-09T15:09:14Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T15:02:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: jcy204/heat_model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jcy204/heat_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2766
- Validation Loss: 0.5538
- Train Accuracy: 0.7981
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3540, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6303 | 0.5314 | 0.7876 | 0 |
| 0.4221 | 0.5178 | 0.7921 | 1 |
| 0.2766 | 0.5538 | 0.7981 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:08:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:08:56Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jcy204/cold_model2
|
jcy204
| 2023-08-09T15:02:39Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T14:57:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: jcy204/cold_model2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jcy204/cold_model2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3582
- Validation Loss: 0.6678
- Train Accuracy: 0.7477
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7779 | 0.6213 | 0.7392 | 0 |
| 0.5323 | 0.6326 | 0.7315 | 1 |
| 0.3582 | 0.6678 | 0.7477 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l5_v20
|
KingKazma
| 2023-08-09T15:01:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T15:01:55Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
leonard-pak/q-FrozenLake-v1-4x4-noSlippery
|
leonard-pak
| 2023-08-09T14:59:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T14:58:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="leonard-pak/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/ToukaLora-15
|
LarryAIDraw
| 2023-08-09T14:58:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T14:39:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/125271/touka-kirishima-tokyo-ghoul-lora
|
LarryAIDraw/GirlsFrontlineAk12
|
LarryAIDraw
| 2023-08-09T14:58:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T14:39:04Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/76960/ak-12-quiet-azure-girls-frontline
|
gsaivinay/Llama-2-7b-Chat-GPTQ
|
gsaivinay
| 2023-08-09T14:57:09Z | 26 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-18T19:21:58Z |
---
language:
- en
license: other
inference: true
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Meta's Llama 2 7b Chat GPTQ
## * Duplicated from TheBloke *
These files are GPTQ model files for [Meta's Llama 2 7b Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## Prompt template: Llama-2-Chat
```
System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 3.90 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-7b-Chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished, it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-7b-Chat-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Llama-2-7b-Chat-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''System: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
User: {prompt}
Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
# Original model card: Meta's Llama 2 7b Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
|Model|Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|Model|Size|TruthfulQA|ToxiGen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s6789_v3_l5_r4
|
KingKazma
| 2023-08-09T14:43:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:43:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
kasiarun/bloom-560m-peft-1
|
kasiarun
| 2023-08-09T14:43:34Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:43:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch reconstructing it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
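As a sketch, the listed values map onto `transformers.BitsAndBytesConfig` roughly as follows (assuming a transformers version that exposes this class; the `bnb_4bit_*` fields are inert here since `load_in_4bit` is False):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above (8-bit path).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```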
### Framework versions
- PEFT 0.5.0.dev0
|
zjoe/RLCourseppo-Huggy
|
zjoe
| 2023-08-09T14:43:19Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-09T14:43:10Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zjoe/RLCourseppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s6789_v3_l5_r2
|
KingKazma
| 2023-08-09T14:43:15Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T16:42:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s6789_v3_l5_r4
|
KingKazma
| 2023-08-09T14:36:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:36:44Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
bofenghuang/flan-t5-large-dialogsum-fr
|
bofenghuang
| 2023-08-09T14:34:43Z | 274 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-03-02T19:25:55Z |
---
license: apache-2.0
language: fr
library_name: transformers
thumbnail: null
tags:
- summarization
widget:
- text: "Pierre: J’ai oublié ma trousse. Tu peux me prêter un stylo.\nLucie: Tiens.\nPierre: Merci. Tu peux me donner une feuille de papier aussi ?\nLucie: Euh… oui. Tiens.\nPierre: Merci. Ça t’ennuie pas si je regarde avec toi ? J’ai oublié mon livre…\nLucie: Non, pas de problème.\nPierre: Pff. Je ne comprends rien. Tu pourras m’expliquer après le cours ?\nLucie: Oui, si tu veux… On ira au café.\nPierre: Oui… euh non, j’ai oublié mon porte-monnaie \nLucie: Bon allez ! ce n’est pas grave, je t’invite.\nPierre: Tu es trop gentille.\nLucie: Oui, c’est bien possible."
metrics:
- rouge
model-index:
- name: Fine-tuned FLAN-T5 large model for French dialogue summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-tuned FLAN-T5 large model for French Dialogue Summarization
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) for French dialogue summarization.
## Usage
Inference with 🤗 Pipeline
```python
import torch
from transformers import pipeline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
pipe = pipeline(
"summarization",
model="bofenghuang/flan-t5-large-dialogsum-fr",
device=device,
)
dialogue_text = """Pierre: J’ai oublié ma trousse. Tu peux me prêter un stylo.
Lucie: Tiens.
Pierre: Merci. Tu peux me donner une feuille de papier aussi ?
Lucie: Euh… oui. Tiens.
Pierre: Merci. Ça t’ennuie pas si je regarde avec toi ? J’ai oublié mon livre…
Lucie: Non, pas de problème.
Pierre: Pff. Je ne comprends rien. Tu pourras m’expliquer après le cours ?
Lucie: Oui, si tu veux… On ira au café.
Pierre: Oui… euh non, j’ai oublié mon porte-monnaie.
Lucie: Bon allez ! ce n’est pas grave, je t’invite.
Pierre: Tu es trop gentille.
Lucie: Oui, c’est bien possible."""
summarized_text = pipe(dialogue_text, max_length=1024)[0]["summary_text"] # greedy
# summarized_text = pipe(dialogue_text, max_length=1024, num_beams=5)[0]["summary_text"] # beam search
```
|
dimonyara/Llama2-7b-lora-int4
|
dimonyara
| 2023-08-09T14:32:04Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:31:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
DiegoVSulz/capivarinha_portugues_7Blv2-4bit-128-GPTQ
|
DiegoVSulz
| 2023-08-09T14:31:20Z | 0 | 2 | null |
[
"text2text-generation",
"pt",
"dataset:Guilherme34/Cabrita-lora-ptbr",
"region:us"
] |
text2text-generation
| 2023-08-09T05:50:12Z |
---
datasets:
- Guilherme34/Cabrita-lora-ptbr
language:
- pt
pipeline_tag: text2text-generation
---
Llama v2 7B model fine-tuned for Portuguese via QLoRA, with good results for the language. Tested only on Windows with CUDA 12.1; at least 4 GB of GPU RAM is likely required because of the 4-bit quantization.
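The card ships no usage code; here is a minimal AutoGPTQ sketch. The generation settings, and the assumption that the tokenizer is bundled in this repo, are mine rather than the author's:

```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo = "DiegoVSulz/capivarinha_portugues_7Blv2-4bit-128-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes the tokenizer ships with the repo
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

prompt = "Explique o que é aprendizado de máquina."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```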
|
dinesh44/gptdatabot
|
dinesh44
| 2023-08-09T14:28:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T10:52:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
tolga-ozturk/mGPT-nsp
|
tolga-ozturk
| 2023-08-09T14:28:01Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"nsp",
"next-sentence-prediction",
"gpt",
"en",
"de",
"dataset:wikipedia",
"arxiv:2307.07331",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2023-07-19T14:42:58Z |
---
language:
- en
- de
tags:
- nsp
- next-sentence-prediction
- gpt
datasets:
- wikipedia
metrics:
- accuracy
---
# mGPT-nsp
mGPT-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [multilingual GPT](https://huggingface.co/THUMT/mGPT) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
## Model description
mGPT-nsp is a Transformer-based model which was fine-tuned for Next Sentence Prediction task on 11000 English and 11000 German Wikipedia articles. We use the same tokenization and vocabulary as the [mT5 model](https://huggingface.co/google/mt5-base).
## Intended uses
- Apply Next Sentence Prediction tasks (and compare the results with BERT models, since BERT natively supports this task)
- See how to fine-tune an mGPT model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results
## How to use
You can use this model directly with a pipeline for next sentence prediction. Here is how to use this model in PyTorch:
### Necessary Initialization
```python
from transformers import MT5Tokenizer, GPT2Model
import torch
from huggingface_hub import hf_hub_download
class ModelNSP(torch.nn.Module):
def __init__(self, pretrained_model="THUMT/mGPT"):
super(ModelNSP, self).__init__()
self.core_model = GPT2Model.from_pretrained(pretrained_model)
self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, 300),
torch.nn.Linear(300, 300), torch.nn.Linear(300, 2))
def forward(self, input_ids, attention_mask=None):
return self.nsp_head(self.core_model(input_ids, attention_mask=attention_mask)[0].mean(dim=1)).softmax(dim=-1)
model = torch.nn.DataParallel(ModelNSP().eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/mGPT-nsp", filename="model_weights.bin")))
tokenizer = MT5Tokenizer.from_pretrained("tolga-ozturk/mGPT-nsp")
```
### Inference
```python
batch_texts = [("In Italy, pizza is presented unsliced.", "The sky is blue."),
("In Italy, pizza is presented unsliced.", "However, it is served sliced in Turkey.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/mgpt-nsp/resolve/main/metrics.png">
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023stereotypical,
title={How Different Is Stereotypical Bias Across Languages?},
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
year={2023},
eprint={2307.07331},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!
|
tolga-ozturk/mt5-base-nsp
|
tolga-ozturk
| 2023-08-09T14:27:30Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"nsp",
"next-sentence-prediction",
"t5",
"en",
"de",
"fr",
"es",
"tr",
"dataset:wikipedia",
"arxiv:2307.07331",
"endpoints_compatible",
"region:us"
] | null | 2023-08-03T18:56:52Z |
---
language:
- en
- de
- fr
- es
- tr
tags:
- nsp
- next-sentence-prediction
- t5
- mt5
datasets:
- wikipedia
metrics:
- accuracy
---
# mT5-base-nsp
mT5-base-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [google/mt5-base](https://huggingface.co/google/mt5-base) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
## Model description
mT5-base-nsp is a Transformer-based model which was fine-tuned for Next Sentence Prediction task on 2500 English, 2500 German, 2500 Turkish, 2500 Spanish and 2500 French Wikipedia articles.
## Intended uses
- Apply Next Sentence Prediction tasks (and compare the results with BERT models, since BERT natively supports this task)
- See how to fine-tune an mT5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results
## How to use
You can use this model directly with a pipeline for next sentence prediction. Here is how to use this model in PyTorch:
### Necessary Initialization
```python
import torch
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
from huggingface_hub import hf_hub_download
class ModelNSP(torch.nn.Module):
def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
super(ModelNSP, self).__init__()
self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
self.core_model = MT5ForConditionalGeneration.from_pretrained(pretrained_model)
self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))
def forward(self, input_ids, attention_mask=None):
outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
output_scores=True, return_dict_in_generate=True)
logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
return torch.stack(logits).softmax(dim=-1)
@staticmethod
def find_label_encoding(input_str, tokenizer):
encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
return (torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str)
tokenizer = MT5Tokenizer.from_pretrained("tolga-ozturk/mT5-base-nsp")
model = torch.nn.DataParallel(ModelNSP("google/mt5-base", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/mT5-base-nsp", filename="model_weights.bin")))
```
### Inference
```python
batch_texts = [("In Italy, pizza is presented unsliced.", "The sky is blue."),
("In Italy, pizza is presented unsliced.", "However, it is served sliced in Turkey.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/mt5-base-nsp/resolve/main/metrics.png">
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023stereotypical,
title={How Different Is Stereotypical Bias Across Languages?},
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
year={2023},
eprint={2307.07331},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!
|
tolga-ozturk/t5-spanish-nsp
|
tolga-ozturk
| 2023-08-09T14:25:31Z | 3 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"nsp",
"next-sentence-prediction",
"es",
"dataset:wikipedia",
"arxiv:2307.07331",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-09T13:45:31Z |
---
language:
- es
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
- wikipedia
metrics:
- accuracy
---
# T5-spanish-nsp
T5-spanish-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
## Model description
T5-spanish-nsp is a Transformer-based model which was fine-tuned for Next Sentence Prediction task on 20000 Spanish Wikipedia articles.
## Intended uses
- Apply Next Sentence Prediction tasks (and compare the results with BERT models, since BERT natively supports this task)
- See how to fine-tune a T5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results
## How to use
You can use this model directly with a pipeline for next sentence prediction. Here is how to use this model in PyTorch:
### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer
from huggingface_hub import hf_hub_download
class ModelNSP(torch.nn.Module):
def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
super(ModelNSP, self).__init__()
self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))
def forward(self, input_ids, attention_mask=None):
outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
output_scores=True, return_dict_in_generate=True)
logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
return torch.stack(logits).softmax(dim=-1)
@staticmethod
def find_label_encoding(input_str, tokenizer):
encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
return (torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str)
tokenizer = AutoTokenizer.from_pretrained("tolga-ozturk/t5-spanish-nsp")
model = torch.nn.DataParallel(ModelNSP("flax-community/spanish-t5-small", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-spanish-nsp", filename="model_weights.bin")))
```
### Inference
```python
batch_texts = [("clasificación binaria: En Italia, la pizza se presenta sin rebanar.", "El cielo es azul."),
("clasificación binaria: En Italia, la pizza se presenta sin rebanar.", "Sin embargo, se sirve en rodajas en Turquía.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/t5-spanish-nsp/resolve/main/metrics.png">
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023stereotypical,
title={How Different Is Stereotypical Bias Across Languages?},
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
year={2023},
eprint={2307.07331},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s6789_v3_l5_r4
|
KingKazma
| 2023-08-09T14:22:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:22:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s6789_v3_l5_r2
|
KingKazma
| 2023-08-09T14:15:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T16:13:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
feabries/ddpm-celebahq-finetuned-butterflies-2epochs
|
feabries
| 2023-08-09T14:11:58Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-09T14:11:40Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This pipeline is `ddpm-celebahq` fine-tuned for two epochs on a butterfly image dataset, following Unit 2 of the course.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('feabries/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e3_s6789_v3_l5_r4
|
KingKazma
| 2023-08-09T14:09:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:08:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Against61/llama2-qlora-finetunined-CHT
|
Against61
| 2023-08-09T14:06:35Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T14:06:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s6789_v3_l5_r2
|
KingKazma
| 2023-08-09T14:01:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T15:59:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
clibrain/Llama-2-ft-instruct-es
|
clibrain
| 2023-08-09T13:56:42Z | 1,483 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"es",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-21T08:40:47Z |
---
license: apache-2.0
language:
- es
pipeline_tag: text-generation
library_name: transformers
inference: false
---
# Llama-2-ft-instruct-es
# ⚠️ Please go to [clibrain/Llama-2-7b-ft-instruct-es](https://huggingface.co/clibrain/Llama-2-7b-ft-instruct-es) for the fixed and updated version.
[Llama 2 (7B)](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned on [Clibrain](https://huggingface.co/clibrain)'s Spanish instructions dataset.
## Model Details
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This repository contains an instruction fine-tune of the 7B model.
## Example of Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "clibrain/Llama-2-ft-instruct-es"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)
def create_instruction(instruction, input_data=None, context=None):
sections = {
"Instrucción": instruction,
"Entrada": input_data,
"Contexto": context,
}
system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
prompt = system_prompt
for title, content in sections.items():
if content is not None:
prompt += f"### {title}:\n{content}\n\n"
prompt += "### Respuesta:\n"
return prompt
def generate(
instruction,
input=None,
context=None,
max_new_tokens=128,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=4,
**kwargs
):
prompt = create_instruction(instruction, input, context)
print(prompt.replace("### Respuesta:\n", ""))
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=max_new_tokens,
early_stopping=True
)
s = generation_output.sequences[0]
output = tokenizer.decode(s)
return output.split("### Respuesta:")[1].lstrip("\n")
instruction = "Dame una lista de lugares a visitar en España."
print(generate(instruction))
```
|
arminhaberl/faster-whisper-base
|
arminhaberl
| 2023-08-09T13:56:36Z | 9 | 1 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T13:56:01Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper base model for CTranslate2
This repository contains the conversion of [openai/whisper-base](https://huggingface.co/openai/whisper-base) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("base")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-base --output_dir faster-whisper-base \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
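For example, to trade the FP16 weights for INT8 at load time (a sketch; any compute type supported by your hardware works):

```python
from faster_whisper import WhisperModel

# Load the FP16 checkpoint but run it as INT8 on CPU.
model = WhisperModel("base", device="cpu", compute_type="int8")
```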
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base).**
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s6789_v3_l5_r4
|
KingKazma
| 2023-08-09T13:55:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T13:55:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
arminhaberl/faster-whisper-large-v1
|
arminhaberl
| 2023-08-09T13:55:02Z | 11 | 0 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T13:53:41Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v1 model for CTranslate2
This repository contains the conversion of [openai/whisper-large](https://huggingface.co/openai/whisper-large) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v1")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large --output_dir faster-whisper-large-v1 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large).**
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s6789_v3_l5_r2
|
KingKazma
| 2023-08-09T13:54:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T15:51:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
arminhaberl/faster-whisper-large-v2
|
arminhaberl
| 2023-08-09T13:53:04Z | 12 | 0 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T13:51:56Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
|
arminhaberl/faster-whisper-medium
|
arminhaberl
| 2023-08-09T13:51:15Z | 7 | 0 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T13:50:36Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper medium model for CTranslate2
This repository contains the conversion of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("medium")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-medium --output_dir faster-whisper-medium \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-medium).**
|
Ilias7/ppo-LunarLander-v2
|
Ilias7
| 2023-08-09T13:47:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T13:47:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.64 +/- 21.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the course naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename follows the Deep RL course convention; adjust it if the repo differs.
checkpoint = load_from_hub(repo_id="Ilias7/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s6789_v3_l5_r2
|
KingKazma
| 2023-08-09T13:47:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-04T15:44:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
tolga-ozturk/t5-french-nsp
|
tolga-ozturk
| 2023-08-09T13:37:49Z | 4 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"nsp",
"next-sentence-prediction",
"fr",
"dataset:wikipedia",
"arxiv:2307.07331",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-09T13:11:36Z |
---
language:
- fr
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
- wikipedia
metrics:
- accuracy
---
# T5-french-nsp
T5-french-nsp is fine-tuned for the Next Sentence Prediction task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) using the [plguillou/t5-base-fr-sum-cnndm](https://huggingface.co/plguillou/t5-base-fr-sum-cnndm) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.
## Model description
T5-french-nsp is a Transformer-based model which was fine-tuned for Next Sentence Prediction task on 14000 French Wikipedia articles.
## Intended uses
- Apply Next Sentence Prediction tasks (and compare the results with BERT models, since BERT natively supports this task)
- See how to fine-tune a T5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main)
- Check our [paper](https://arxiv.org/abs/2307.07331) to see its results
## How to use
You can use this model directly with a pipeline for next sentence prediction. Here is how to use this model in PyTorch:
### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from huggingface_hub import hf_hub_download
class ModelNSP(torch.nn.Module):
def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
super(ModelNSP, self).__init__()
self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))
def forward(self, input_ids, attention_mask=None):
outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
output_scores=True, return_dict_in_generate=True)
logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
return torch.stack(logits).softmax(dim=-1)
@staticmethod
def find_label_encoding(input_str, tokenizer):
encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
return (torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str)
tokenizer = T5Tokenizer.from_pretrained("tolga-ozturk/t5-french-nsp")
model = torch.nn.DataParallel(ModelNSP("plguillou/t5-base-fr-sum-cnndm", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-french-nsp", filename="model_weights.bin")))
```
### Inference
```python
batch_texts = [("classification binaire: En Italie, la pizza est présentée non tranchée.", "Le ciel est bleu."),
("classification binaire: En Italie, la pizza est présentée non tranchée.", "Cependant, il est servi en tranches en Turquie.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/t5-french-nsp/resolve/main/metrics.png">
## BibTeX entry and citation info
```bibtex
@misc{ozturk2023stereotypical,
title={How Different Is Stereotypical Bias Across Languages?},
author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
year={2023},
eprint={2307.07331},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
This work was done with the Ludwig-Maximilians-Universität Statistics group; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!
|
ayeshagonzales/MBM_Model
|
ayeshagonzales
| 2023-08-09T13:32:42Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T13:12:48Z |
---
license: mit
---
Models for use with the https://github.com/hincz-lab/motion-blur-microscopy repository.
For analyzing image data with either SRBC or both SRBC and CAR-T cells (i.e. SRBC on Laminin or CAR-T and SRBC on P-Selectin), use *Motion_Blur_Modern_Three.h5*
For analyzing image data with only CAR-T cells (i.e. CAR-T on E-selectin), use *Phase_One_Network_E_Selectin_Car_T.h5*
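A minimal loading sketch with Keras (hypothetical; the actual analysis pipeline lives in the GitHub repository linked above):

```python
from huggingface_hub import hf_hub_download
from tensorflow import keras

# Download one of the checkpoints named above and load it as a Keras model.
weights_path = hf_hub_download(repo_id="ayeshagonzales/MBM_Model",
                               filename="Motion_Blur_Modern_Three.h5")
model = keras.models.load_model(weights_path)
```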
|
MemerOwO/Erkin_Koray
|
MemerOwO
| 2023-08-09T13:28:26Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-09T10:39:14Z |
---
license: bigcode-openrail-m
---
|
manuu01/SoccerTwos
|
manuu01
| 2023-08-09T13:28:00Z | 577 | 1 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-30T22:51:19Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: manuu01/SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
polejowska/detr-r50-cd45rb-8ah-2l-corrected
|
polejowska
| 2023-08-09T13:26:25Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-09T05:36:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-2l-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-2l-corrected
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.7108 | 1.0 | 4606 | 2.7888 |
| 3.3322 | 2.0 | 9212 | 2.5539 |
| 3.2038 | 3.0 | 13818 | 2.4728 |
| 3.1338 | 4.0 | 18424 | 2.4153 |
| 3.0774 | 5.0 | 23030 | 2.4054 |
| 3.0301 | 6.0 | 27636 | 2.3471 |
| 2.9925 | 7.0 | 32242 | 2.3332 |
| 2.9639 | 8.0 | 36848 | 2.3221 |
| 2.944 | 9.0 | 41454 | 2.3080 |
| 2.9248 | 10.0 | 46060 | 2.2973 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Sivapriya2133/the-cat-csd
|
Sivapriya2133
| 2023-08-09T13:21:21Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T13:15:12Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### THE-CAT-CSD Dreambooth model trained by Sivapriya2133 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CCIEK149
Sample pictures of this concept:
|
amit0814/wav2vec2-large-xls-r-300m-hi-spot-colab
|
amit0814
| 2023-08-09T13:08:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T12:42:45Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi-spot-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-spot-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
Phaaarus/QLoRA_replica_8rank_QKadap
|
Phaaarus
| 2023-08-09T12:51:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T12:48:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
arywidanthi/Heart-failure-prediction
|
arywidanthi
| 2023-08-09T12:36:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-09T12:36:02Z |
---
title: Heart Failure Gc3
emoji: 📊
colorFrom: purple
colorTo: blue
sdk: streamlit
sdk_version: 1.21.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
ruudra/trial-obj-det
|
ruudra
| 2023-08-09T12:31:38Z | 0 | 0 | null |
[
"object-detection",
"region:us"
] |
object-detection
| 2023-08-09T10:18:34Z |
---
pipeline_tag: object-detection
---
### How to use
Here is how to use this model:

```bash
python detect.py --weights best.pt --img 416 --conf 0.4 --source img.png
```
|
felixb85/reinforce-Pixelcopter-PLE-v0
|
felixb85
| 2023-08-09T12:27:19Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T11:48:53Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.70 +/- 19.35
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EdJ1234/lora-peft-legal-summ
|
EdJ1234
| 2023-08-09T12:21:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T12:21:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
annaovesnaatatt/q-Taxi-v3
|
annaovesnaatatt
| 2023-08-09T12:08:48Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T12:08:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="annaovesnaatatt/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
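`load_from_hub` is not imported above; in the Deep RL course it is a small helper along these lines (a sketch, assuming the checkpoint is a pickled dict as the course produces):

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dict from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```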
|
heegyu/WizardVicuna-3B-0719
|
heegyu
| 2023-08-09T12:08:44Z | 3,684 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:heegyu/wizard_vicuna_70k_v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-23T02:51:40Z |
---
license: apache-2.0
language:
- en
datasets:
- heegyu/wizard_vicuna_70k_v2
---
Base Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
## Usage
```
### Human:
your instruction
### ASSISANT:
output will be generated and ended with <|endoftext|>
```
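A minimal generation sketch with transformers (my own, not the author's; the template spelling, including "ASSISANT", is kept verbatim from the block above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/WizardVicuna-3B-0719"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Template spelling kept verbatim from the card above.
prompt = "### Human:\nWhat is the capital of France?\n### ASSISANT:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```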
|
annaovesnaatatt/q-FrozenLake-v1-4x4-noSlippery
|
annaovesnaatatt
| 2023-08-09T12:02:45Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T12:02:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the download helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="annaovesnaatatt/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RIOLITE/products_matching_aumet_fine_tune_2023-08-09
|
RIOLITE
| 2023-08-09T12:02:29Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-09T07:54:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
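The embeddings can then be compared with cosine similarity, for example:
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences above
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```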
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
RIOLITE/products_matching_aumet_scratch_2023-08-09
|
RIOLITE
| 2023-08-09T12:02:09Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-09T07:52:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
JFuellem/whisper-tiny-en-US
|
JFuellem
| 2023-08-09T12:01:19Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T11:29:15Z |
---
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: Whisper Tiny en-US - JFuellem
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: MInDS-14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3530106257378985
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny en-US - JFuellem
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the MInDS-14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6457
- Wer Ortho: 35.7187
- Wer: 0.3530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.6457 | 35.7187 | 0.3530 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
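A minimal inference sketch with the `transformers` pipeline (`sample.wav` is a hypothetical local file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="JFuellem/whisper-tiny-en-US")
print(asr("sample.wav")["text"])
```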
|
hoang14/law_chatbot_1b7_2048_context_mixed_data
|
hoang14
| 2023-08-09T11:59:38Z | 0 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-08T00:48:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
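As a loading sketch that mirrors this 4-bit NF4 config (the base model id is again a placeholder, since the card does not name it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Recreate the 4-bit NF4 config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "hoang14/law_chatbot_1b7_2048_context_mixed_data")
```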
|
Abubakar144/lsa_nlp_final
|
Abubakar144
| 2023-08-09T11:58:12Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T11:57:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: lsa_nlp_final
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lsa_nlp_final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
fuxj/ppo-LunarLander-v2
|
fuxj
| 2023-08-09T11:57:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T11:57:38Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.76 +/- 47.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed from the repo name
checkpoint = load_from_hub("fuxj/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phoneme-based
|
nrshoudi
| 2023-08-09T11:54:51Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-12T23:11:40Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-Arabic-phoneme-based
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Arabic-phoneme-based
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7493
- Per: 0.1979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.9601 | 1.0 | 2187 | 1.7221 | 0.9190 |
| 1.307 | 2.0 | 4374 | 1.0964 | 0.4532 |
| 0.9363 | 3.0 | 6561 | 0.9163 | 0.3469 |
| 0.7942 | 4.0 | 8748 | 0.8432 | 0.3037 |
| 0.7 | 5.0 | 10935 | 0.7827 | 0.2881 |
| 0.6274 | 6.0 | 13122 | 0.7456 | 0.2713 |
| 0.5692 | 7.0 | 15309 | 0.6924 | 0.2572 |
| 0.5203 | 8.0 | 17496 | 0.6521 | 0.2491 |
| 0.4853 | 9.0 | 19683 | 0.6583 | 0.2420 |
| 0.4448 | 10.0 | 21870 | 0.6580 | 0.2312 |
| 0.4134 | 11.0 | 24057 | 0.6313 | 0.2380 |
| 0.389 | 12.0 | 26244 | 0.6099 | 0.2225 |
| 0.3644 | 13.0 | 28431 | 0.6238 | 0.2239 |
| 0.3432 | 14.0 | 30618 | 0.6369 | 0.2195 |
| 0.3191 | 15.0 | 32805 | 0.6391 | 0.2164 |
| 0.2992 | 16.0 | 34992 | 0.6314 | 0.2164 |
| 0.2827 | 17.0 | 37179 | 0.6385 | 0.2143 |
| 0.2666 | 18.0 | 39366 | 0.6330 | 0.2159 |
| 0.2479 | 19.0 | 41553 | 0.6653 | 0.2125 |
| 0.2341 | 20.0 | 43740 | 0.6692 | 0.2165 |
| 0.2209 | 21.0 | 45927 | 0.6656 | 0.2199 |
| 0.2075 | 22.0 | 48114 | 0.6669 | 0.2104 |
| 0.1955 | 23.0 | 50301 | 0.6830 | 0.2044 |
| 0.1825 | 24.0 | 52488 | 0.6973 | 0.2065 |
| 0.1758 | 25.0 | 54675 | 0.7265 | 0.2013 |
| 0.1644 | 26.0 | 56862 | 0.7416 | 0.2040 |
| 0.1571 | 27.0 | 59049 | 0.7202 | 0.2007 |
| 0.1489 | 28.0 | 61236 | 0.7224 | 0.2019 |
| 0.1432 | 29.0 | 63423 | 0.7357 | 0.1988 |
| 0.1373 | 30.0 | 65610 | 0.7493 | 0.1979 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
silpakanneganti/roberta-cpt-medical-ner
|
silpakanneganti
| 2023-08-09T11:45:58Z | 32 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:silpakanneganti/roberta-cpt-medical-ner",
"base_model:finetune:silpakanneganti/roberta-cpt-medical-ner",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-05T19:38:54Z |
---
license: mit
base_model: silpakanneganti/roberta-cpt-medical-ner
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-cpt-medical-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-cpt-medical-ner
This model is a fine-tuned version of [silpakanneganti/roberta-cpt-medical-ner](https://huggingface.co/silpakanneganti/roberta-cpt-medical-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8502
- Precision: 0.0342
- Recall: 0.1849
- F1: 0.0577
- Accuracy: 0.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 25 | 0.8394 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 2.0 | 50 | 0.8356 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 3.0 | 75 | 0.8381 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 4.0 | 100 | 0.8406 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 5.0 | 125 | 0.8426 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 6.0 | 150 | 0.8432 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 7.0 | 175 | 0.8431 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 8.0 | 200 | 0.8461 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 9.0 | 225 | 0.8497 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
| No log | 10.0 | 250 | 0.8502 | 0.0342 | 0.1849 | 0.0577 | 0.1849 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
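Given the near-random metrics above, the following usage sketch is included only for completeness; the example sentence is hypothetical:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="silpakanneganti/roberta-cpt-medical-ner",
    aggregation_strategy="simple",  # merge sub-token predictions into entity spans
)
print(ner("Patient underwent a CPT 99213 office visit."))
```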
|
caiAtSNU/q-FrozenLake-v1-4x4-noSlippery
|
caiAtSNU
| 2023-08-09T11:27:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T11:18:23Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the download helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="caiAtSNU/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
simonycl/roberta-large-sst-2-16-13-30
|
simonycl
| 2023-08-09T11:19:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T11:16:53Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-16-13-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-16-13-30
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6901
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6957 | 0.5 |
| No log | 2.0 | 2 | 0.6955 | 0.5 |
| No log | 3.0 | 3 | 0.6952 | 0.5 |
| No log | 4.0 | 4 | 0.6944 | 0.5 |
| No log | 5.0 | 5 | 0.6937 | 0.5 |
| No log | 6.0 | 6 | 0.6933 | 0.5 |
| No log | 7.0 | 7 | 0.6929 | 0.5 |
| No log | 8.0 | 8 | 0.6942 | 0.5 |
| No log | 9.0 | 9 | 0.6931 | 0.5 |
| 0.6903 | 10.0 | 10 | 0.6917 | 0.5 |
| 0.6903 | 11.0 | 11 | 0.6905 | 0.5 |
| 0.6903 | 12.0 | 12 | 0.6891 | 0.5312 |
| 0.6903 | 13.0 | 13 | 0.6883 | 0.625 |
| 0.6903 | 14.0 | 14 | 0.6874 | 0.6562 |
| 0.6903 | 15.0 | 15 | 0.6849 | 0.5312 |
| 0.6903 | 16.0 | 16 | 0.6822 | 0.5312 |
| 0.6903 | 17.0 | 17 | 0.6790 | 0.5 |
| 0.6903 | 18.0 | 18 | 0.6742 | 0.5 |
| 0.6903 | 19.0 | 19 | 0.6650 | 0.5312 |
| 0.626 | 20.0 | 20 | 0.6524 | 0.5312 |
| 0.626 | 21.0 | 21 | 0.6444 | 0.5312 |
| 0.626 | 22.0 | 22 | 0.6361 | 0.5625 |
| 0.626 | 23.0 | 23 | 0.6327 | 0.5938 |
| 0.626 | 24.0 | 24 | 0.6337 | 0.625 |
| 0.626 | 25.0 | 25 | 0.6437 | 0.625 |
| 0.626 | 26.0 | 26 | 0.6580 | 0.6562 |
| 0.626 | 27.0 | 27 | 0.6725 | 0.6562 |
| 0.626 | 28.0 | 28 | 0.6812 | 0.625 |
| 0.626 | 29.0 | 29 | 0.6873 | 0.625 |
| 0.4393 | 30.0 | 30 | 0.6901 | 0.625 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
Hekenye/3d
|
Hekenye
| 2023-08-09T11:17:54Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-09T11:05:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A house in 3d rendering style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Hekenye/3d
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "A house in 3d rendering style" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
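A minimal inference sketch with `diffusers` (assuming a release recent enough to provide `load_lora_weights`; the output filename is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Hekenye/3d")  # apply the LoRA adaptation weights
image = pipe("A house in 3d rendering style").images[0]
image.save("house_3d.png")
```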
|
sivasis-tripathy/Llama-2-7b-chat-midjourney-prompts-2
|
sivasis-tripathy
| 2023-08-09T11:15:35Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T11:10:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
openerotica/Llama-2-13B-GPTQ
|
openerotica
| 2023-08-09T11:11:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-09T09:50:30Z |
---
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's Llama 2 13B fp16
These files are fp16 format model files for [Meta's Llama 2 13B](https://huggingface.co/meta-llama/Llama-2-13b-hf).
They were produced by downloading the PTH files from Meta, and then converting to HF format using the latest Transformers 4.32.0.dev0, from Git, with the Llama 2 PR included: https://github.com/huggingface/transformers/pull/24891.
Command to convert was:
```
python3 /workspace/venv/pytorch2/lib/python3.10/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /workspace/git/llama/download --model_size 13B --output_dir /workspace/process/llama-2-13b/source
```
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-GPTQ)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13b-hf)
* [My fp16 conversion of the unquantised PTH model files](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
## Prompt template: None
```
{prompt}
```
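As a loading sketch for GPU inference (the prompt is illustrative; access to Llama 2 weights may be gated by the Meta license):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "openerotica/Llama-2-13B-GPTQ"  # this repository; see the list above for alternatives
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```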
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's Llama 2 13B
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Deexit/swin-tiny-patch4-window7-224-finetuned-eurosat
|
Deexit
| 2023-08-09T11:05:30Z | 78 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"swin",
"image-classification",
"generated_from_keras_callback",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-09T10:28:03Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_keras_callback
model-index:
- name: Deexit/swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deexit/swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9176
- Validation Loss: 3.2903
- Validation Accuracy: 0.0
- Epoch: 13
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:-----:|
| 2.7734 | 2.7408 | 0.0 | 0 |
| 2.4056 | 2.7463 | 0.0 | 1 |
| 2.1880 | 2.7762 | 0.0 | 2 |
| 2.0477 | 2.8285 | 0.0 | 3 |
| 2.1556 | 2.8884 | 0.0 | 4 |
| 2.0269 | 2.9569 | 0.0 | 5 |
| 1.7258 | 3.0337 | 0.0 | 6 |
| 2.3555 | 3.1071 | 0.0 | 7 |
| 1.8657 | 3.1494 | 0.0 | 8 |
| 1.8121 | 3.1848 | 0.0 | 9 |
| 1.9192 | 3.2109 | 0.0 | 10 |
| 1.9925 | 3.2335 | 0.0 | 11 |
| 2.0157 | 3.2654 | 0.0 | 12 |
| 1.9176 | 3.2903 | 0.0 | 13 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.3
- Tokenizers 0.13.3
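A usage sketch with the `transformers` pipeline (the repository ships TensorFlow weights, hence `framework="tf"`; the image path is a placeholder, and note the reported validation accuracy of 0.0):
```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Deexit/swin-tiny-patch4-window7-224-finetuned-eurosat",
    framework="tf",  # load the TensorFlow checkpoint
)
print(clf("satellite_tile.png"))
```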
|
newronai/llama-2-7b-QLoRA-Trial2
|
newronai
| 2023-08-09T10:58:10Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T10:57:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
RebeccaKnudsen/falcon-7b-instruct-ft-adapters
|
RebeccaKnudsen
| 2023-08-09T10:52:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T10:52:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_simkd_rand
|
jordyvl
| 2023-08-09T10:48:04Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-08T21:40:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_simkd_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip-tiny_rvl_cdip-NK1000_simkd_rand
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0667
- Accuracy: 0.5865
- Brier Loss: 0.5908
- Nll: 3.0393
- F1 Micro: 0.5865
- F1 Macro: 0.5890
- Ece: 0.1479
- Aurc: 0.2054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.0807 | 1.0 | 1000 | 0.0798 | 0.095 | 0.9362 | 7.0778 | 0.095 | 0.0517 | 0.0524 | 0.8510 |
| 0.0785 | 2.0 | 2000 | 0.0782 | 0.142 | 0.9268 | 6.5000 | 0.142 | 0.0892 | 0.0843 | 0.7446 |
| 0.0768 | 3.0 | 3000 | 0.0761 | 0.253 | 0.8945 | 4.3268 | 0.253 | 0.1827 | 0.1545 | 0.5697 |
| 0.0753 | 4.0 | 4000 | 0.0747 | 0.327 | 0.8672 | 3.7313 | 0.327 | 0.2733 | 0.2052 | 0.4558 |
| 0.074 | 5.0 | 5000 | 0.0739 | 0.359 | 0.8410 | 3.6965 | 0.359 | 0.2941 | 0.2102 | 0.4159 |
| 0.0729 | 6.0 | 6000 | 0.0725 | 0.3795 | 0.8104 | 3.2323 | 0.3795 | 0.3340 | 0.2147 | 0.3672 |
| 0.0718 | 7.0 | 7000 | 0.0717 | 0.4165 | 0.7806 | 3.1185 | 0.4165 | 0.3770 | 0.2186 | 0.3378 |
| 0.071 | 8.0 | 8000 | 0.0714 | 0.4175 | 0.7785 | 3.1984 | 0.4175 | 0.3999 | 0.2170 | 0.3408 |
| 0.0703 | 9.0 | 9000 | 0.0707 | 0.457 | 0.7563 | 2.8932 | 0.457 | 0.4310 | 0.2437 | 0.2965 |
| 0.0696 | 10.0 | 10000 | 0.0699 | 0.4665 | 0.7452 | 2.7889 | 0.4665 | 0.4529 | 0.2456 | 0.2828 |
| 0.0691 | 11.0 | 11000 | 0.0693 | 0.499 | 0.7219 | 2.7292 | 0.499 | 0.4756 | 0.2543 | 0.2579 |
| 0.0685 | 12.0 | 12000 | 0.0691 | 0.4955 | 0.7144 | 2.8807 | 0.4955 | 0.4734 | 0.2443 | 0.2515 |
| 0.068 | 13.0 | 13000 | 0.0688 | 0.5072 | 0.7096 | 2.6737 | 0.5072 | 0.4944 | 0.2525 | 0.2468 |
| 0.0675 | 14.0 | 14000 | 0.0685 | 0.513 | 0.6952 | 2.7492 | 0.513 | 0.5001 | 0.2404 | 0.2453 |
| 0.0669 | 15.0 | 15000 | 0.0682 | 0.5232 | 0.6855 | 2.7789 | 0.5232 | 0.5048 | 0.2441 | 0.2379 |
| 0.0664 | 16.0 | 16000 | 0.0680 | 0.529 | 0.6790 | 2.8249 | 0.529 | 0.5182 | 0.2366 | 0.2340 |
| 0.0658 | 17.0 | 17000 | 0.0678 | 0.5347 | 0.6668 | 2.7035 | 0.5347 | 0.5237 | 0.2338 | 0.2228 |
| 0.0652 | 18.0 | 18000 | 0.0676 | 0.5335 | 0.6673 | 2.8630 | 0.5335 | 0.5249 | 0.2319 | 0.2252 |
| 0.0651 | 19.0 | 19000 | 0.0675 | 0.5385 | 0.6524 | 2.7522 | 0.5385 | 0.5286 | 0.2172 | 0.2256 |
| 0.0645 | 20.0 | 20000 | 0.0671 | 0.5593 | 0.6454 | 2.7445 | 0.5593 | 0.5563 | 0.2324 | 0.2122 |
| 0.0639 | 21.0 | 21000 | 0.0672 | 0.5453 | 0.6541 | 2.9011 | 0.5453 | 0.5451 | 0.2236 | 0.2204 |
| 0.0634 | 22.0 | 22000 | 0.0668 | 0.5617 | 0.6398 | 2.8668 | 0.5617 | 0.5604 | 0.2264 | 0.2108 |
| 0.0629 | 23.0 | 23000 | 0.0670 | 0.5577 | 0.6295 | 2.8351 | 0.5577 | 0.5521 | 0.1984 | 0.2180 |
| 0.0625 | 24.0 | 24000 | 0.0666 | 0.5765 | 0.6201 | 2.7133 | 0.5765 | 0.5754 | 0.2138 | 0.2035 |
| 0.0618 | 25.0 | 25000 | 0.0666 | 0.565 | 0.6219 | 2.8775 | 0.565 | 0.5614 | 0.2010 | 0.2078 |
| 0.0613 | 26.0 | 26000 | 0.0664 | 0.5795 | 0.6121 | 2.8665 | 0.5795 | 0.5805 | 0.1996 | 0.2024 |
| 0.0606 | 27.0 | 27000 | 0.0667 | 0.5723 | 0.6101 | 2.9450 | 0.5723 | 0.5711 | 0.1804 | 0.2113 |
| 0.0603 | 28.0 | 28000 | 0.0664 | 0.583 | 0.6106 | 2.9126 | 0.583 | 0.5845 | 0.2004 | 0.2006 |
| 0.0597 | 29.0 | 29000 | 0.0665 | 0.5857 | 0.6050 | 2.9881 | 0.5857 | 0.5862 | 0.1912 | 0.2006 |
| 0.0594 | 30.0 | 30000 | 0.0665 | 0.5775 | 0.6043 | 2.9735 | 0.5775 | 0.5797 | 0.1823 | 0.2029 |
| 0.0589 | 31.0 | 31000 | 0.0666 | 0.5733 | 0.6080 | 2.9942 | 0.5733 | 0.5739 | 0.1721 | 0.2129 |
| 0.0585 | 32.0 | 32000 | 0.0667 | 0.5803 | 0.6066 | 3.0341 | 0.5803 | 0.5826 | 0.1748 | 0.2114 |
| 0.0583 | 33.0 | 33000 | 0.0665 | 0.5827 | 0.6033 | 3.0209 | 0.5827 | 0.5880 | 0.1799 | 0.2029 |
| 0.0578 | 34.0 | 34000 | 0.0667 | 0.577 | 0.6020 | 3.0483 | 0.577 | 0.5816 | 0.1636 | 0.2081 |
| 0.0576 | 35.0 | 35000 | 0.0667 | 0.577 | 0.6029 | 3.0263 | 0.577 | 0.5840 | 0.1573 | 0.2117 |
| 0.0574 | 36.0 | 36000 | 0.0667 | 0.5803 | 0.6006 | 3.0578 | 0.5803 | 0.5851 | 0.1627 | 0.2082 |
| 0.057 | 37.0 | 37000 | 0.0666 | 0.582 | 0.5997 | 3.1133 | 0.582 | 0.5867 | 0.1612 | 0.2094 |
| 0.0567 | 38.0 | 38000 | 0.0667 | 0.5817 | 0.5951 | 3.0727 | 0.5817 | 0.5836 | 0.1552 | 0.2091 |
| 0.0566 | 39.0 | 39000 | 0.0666 | 0.5815 | 0.5951 | 3.0308 | 0.5815 | 0.5853 | 0.1559 | 0.2049 |
| 0.0564 | 40.0 | 40000 | 0.0666 | 0.5853 | 0.5940 | 3.0629 | 0.5853 | 0.5880 | 0.1564 | 0.2057 |
| 0.0562 | 41.0 | 41000 | 0.0666 | 0.5845 | 0.5949 | 3.0956 | 0.5845 | 0.5881 | 0.1585 | 0.2055 |
| 0.0561 | 42.0 | 42000 | 0.0666 | 0.5827 | 0.5960 | 3.0679 | 0.5827 | 0.5876 | 0.1540 | 0.2098 |
| 0.0559 | 43.0 | 43000 | 0.0666 | 0.5833 | 0.5909 | 2.9904 | 0.5833 | 0.5854 | 0.1491 | 0.2049 |
| 0.0559 | 44.0 | 44000 | 0.0665 | 0.585 | 0.5915 | 3.0150 | 0.585 | 0.5876 | 0.1543 | 0.2032 |
| 0.0557 | 45.0 | 45000 | 0.0667 | 0.583 | 0.5923 | 3.0501 | 0.583 | 0.5851 | 0.1501 | 0.2056 |
| 0.0557 | 46.0 | 46000 | 0.0666 | 0.5905 | 0.5914 | 3.0110 | 0.5905 | 0.5940 | 0.1550 | 0.2045 |
| 0.0555 | 47.0 | 47000 | 0.0667 | 0.584 | 0.5922 | 3.0464 | 0.584 | 0.5872 | 0.1497 | 0.2069 |
| 0.0555 | 48.0 | 48000 | 0.0667 | 0.588 | 0.5917 | 3.0408 | 0.588 | 0.5919 | 0.1489 | 0.2051 |
| 0.0554 | 49.0 | 49000 | 0.0667 | 0.589 | 0.5908 | 3.0433 | 0.589 | 0.5923 | 0.1496 | 0.2044 |
| 0.0554 | 50.0 | 50000 | 0.0667 | 0.5865 | 0.5908 | 3.0393 | 0.5865 | 0.5890 | 0.1479 | 0.2054 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Hekenye/cartoon
|
Hekenye
| 2023-08-09T10:42:22Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-09T10:28:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A woman walking a dog in flat cartoon illustration style
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Hekenye/cartoon
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "A woman walking a dog in flat cartoon illustration style" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
|
DragosGorduza/fiqa_1400_gpl_trained
|
DragosGorduza
| 2023-08-09T10:31:44Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-09T09:58:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2800 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1400,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
KallistiTMR/llama-2-7b-chat-wiz-k16-9
|
KallistiTMR
| 2023-08-09T10:27:13Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T02:47:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
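For reference, a hedged sketch of loading a PEFT 0.4 adapter on top of a base model quantized with a 4-bit config like the one above; both model identifiers below are hypothetical placeholders, since this excerpt names neither the base model nor the adapter repository:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" and "adapter-id" are placeholders, not names from this card.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "adapter-id")
```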
|
Evan-Lin/Bart-abs-amazon-entailment-50
|
Evan-Lin
| 2023-08-09T10:25:26Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-09T10:17:54Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) fine-tuned with reinforcement learning to
guide its outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline

# BART is an encoder-decoder model, so the text2text-generation pipeline applies.
generator = pipeline("text2text-generation", model="Evan-Lin/Bart-abs-amazon-entailment-50")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

# BART is a seq2seq architecture, so the seq2seq value-head wrapper is used here.
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-amazon-entailment-50")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-amazon-entailment-50")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
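For completeness, a hedged sketch of a single PPO update with TRL (API circa trl 0.4–0.7, the era of this card); the reward below is a placeholder scalar, whereas the actual run presumably scored responses with an entailment model, as the repository name suggests:

```python
import torch
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForSeq2SeqLMWithValueHead

model_id = "Evan-Lin/Bart-abs-amazon-entailment-50"
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_id)
ref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained(model_id)  # frozen reference for the KL penalty
tokenizer = AutoTokenizer.from_pretrained(model_id)

ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model, tokenizer)

query = tokenizer("An example product review to summarize.", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=32)[0]

# Placeholder reward; the real reward function is not documented in this card.
reward = torch.tensor(1.0)
stats = ppo_trainer.step([query], [response], [reward])
```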
|