modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-10 18:30:15
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 553
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-10 18:29:50
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
aroot/eng-mya-simcse_longestplus_ssrb
|
aroot
| 2023-07-07T06:58:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:36:57Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longestplus_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8875
- Bleu: 4.1475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_longest_ssrb
|
aroot
| 2023-07-07T06:54:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T06:33:36Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8458
- Bleu: 4.2571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
IIC/xlm-roberta-large-socialdisner
|
IIC
| 2023-07-07T06:43:47Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"xlm-roberta-large",
"token-classification",
"es",
"dataset:IIC/socialdisner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T08:01:49Z |
---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "IIC/socialdisner"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-socialdisner
results:
- task:
type: token-classification
dataset:
name: socialdisner
type: IIC/socialdisner
split: test
metrics:
- name: f1
type: f1
value: 0.941
pipeline_tag: token-classification
---
# xlm-roberta-large-socialdisner
This model is a finetuned version of xlm-roberta-large for the socialdisner dataset used in a benchmark in the paper TODO. The model has a F1 of 0.941
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 3e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
IIC/mdeberta-v3-base-nubes
|
IIC
| 2023-07-07T06:43:39Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"biomedical",
"clinical",
"spanish",
"mdeberta-v3-base",
"token-classification",
"es",
"dataset:plncmm/nubes",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-21T15:55:52Z |
---
language: es
tags:
- biomedical
- clinical
- spanish
- mdeberta-v3-base
license: mit
datasets:
- "plncmm/nubes"
metrics:
- f1
model-index:
- name: IIC/mdeberta-v3-base-nubes
results:
- task:
type: token-classification
dataset:
name: nubes
type: plncmm/nubes
split: test
metrics:
- name: f1
type: f1
value: 0.919
pipeline_tag: token-classification
---
# mdeberta-v3-base-nubes
This model is a finetuned version of mdeberta-v3-base for the nubes dataset used in a benchmark in the paper TODO. The model has a F1 of 0.919
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 32 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
IIC/XLM_R_Galen-socialdisner
|
IIC
| 2023-07-07T06:43:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"spanish",
"XLM_R_Galen",
"token-classification",
"es",
"dataset:IIC/socialdisner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T08:06:59Z |
---
language: es
tags:
- biomedical
- clinical
- spanish
- XLM_R_Galen
license: mit
datasets:
- "IIC/socialdisner"
metrics:
- f1
model-index:
- name: IIC/XLM_R_Galen-socialdisner
results:
- task:
type: token-classification
dataset:
name: socialdisner
type: IIC/socialdisner
split: test
metrics:
- name: f1
type: f1
value: 0.919
pipeline_tag: token-classification
---
# XLM_R_Galen-socialdisner
This model is a finetuned version of XLM_R_Galen for the socialdisner dataset used in a benchmark in the paper TODO. The model has a F1 of 0.919
Please refer to the original publication for more information TODO LINK
## Parameters used
| parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 4e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
TODO
```
|
squeeze-ai-lab/sq-opt-13b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T06:43:20Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:51:15Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-13b-w3-s50
|
squeeze-ai-lab
| 2023-07-07T06:42:22Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:51:11Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
3-bit quantized OPT 13B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 13B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 3-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
madrylab/robust-imagenet-models
|
madrylab
| 2023-07-07T06:42:07Z | 0 | 1 | null |
[
"arxiv:2007.08489",
"license:mit",
"region:us"
] | null | 2023-07-07T06:05:27Z |
---
license: mit
---
## Overview
This repository contains the robust ImageNet models used in our paper ["Do adversarially robust imagenet models transfer better?"](https://arxiv.org/abs/2007.08489).
See our papers's [GitHub repository](https://github.com/microsoft/robust-models-transfer) for more details!
## Summary of our pretrained models
#### Standard Accuracy of L2-Robust ImageNet Models
|Model|ε=0|ε=0.01|ε=0.03|ε=0.05|ε=0.1|ε=0.25|ε=0.5|ε=1.0|ε=3.0|ε=5.0|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|ResNet-18 |[69.79](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.ckpt) | [69.90](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.01.ckpt) | [69.24](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.03.ckpt) | [69.15](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.05.ckpt) | [68.77](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.1.ckpt) | [67.43](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.25.ckpt) | [65.49](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps0.5.ckpt) | [62.32](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps1.ckpt) | [53.12](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps3.ckpt) | [45.59](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_l2_eps5.ckpt)
ResNet-50|[75.80](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.ckpt) | [75.68](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.01.ckpt) | [75.76](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.03.ckpt) | [75.59](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.05.ckpt) | [74.78](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.1.ckpt) | [74.14](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.25.ckpt) | [73.16](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps0.5.ckpt) | [70.43](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps1.ckpt) | [62.83](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps3.ckpt) | [56.13](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_l2_eps5.ckpt)
Wide-ResNet-50-2|[76.97](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.ckpt) | [77.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.01.ckpt) | [77.26](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.03.ckpt) | [77.17](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.05.ckpt) | [76.74](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.1.ckpt) | [76.21](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.25.ckpt) | [75.11](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps0.5.ckpt) | [73.41](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps1.ckpt) | [66.90](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps3.ckpt) | [60.94](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_l2_eps5.ckpt)
Wide-ResNet-50-4|[77.91](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.ckpt) |[78.02](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.01.ckpt)|[77.87](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.03.ckpt)|[77.77](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.05.ckpt)|[77.64](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.1.ckpt)|[77.10](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.25.ckpt)|[76.52](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps0.5.ckpt)| [75.51](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps1.ckpt) | [69.67](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps3.ckpt)|[65.20](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_4_l2_eps5.ckpt)
|Model | ε=0|ε=3|
|:-----:|:-----:|:-----:|
DenseNet |[77.37](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/densenet_l2_eps0.ckpt) | [66.98](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/densenet_l2_eps3.ckpt)
MNASNET|[60.97](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mnasnet_l2_eps0.ckpt) | [41.83](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mnasnet_l2_eps3.ckpt)
MobileNet-v2|[65.26](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mobilenet_l2_eps0.ckpt) | [50.40](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/mobilenet_l2_eps3.ckpt)
ResNeXt50_32x4d|[77.38](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnext50_32x4d_l2_eps0.ckpt) | [66.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnext50_32x4d_l2_eps3.ckpt)
ShuffleNet|[64.25](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/shufflenet_l2_eps0.ckpt) | [43.32](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/shufflenet_l2_eps3.ckpt)
VGG16_bn|[73.66](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/vgg16_bn_l2_eps0.ckpt) | [57.19](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/vgg16_bn_l2_eps3.ckpt)
#### Standard Accuracy of Linf-Robust ImageNet Models
|Model|ε=0.5/255|ε=1/255|ε=2/255|ε=4/255|ε=8/255|
|---|:---:|:---:|:---:|:---:|:---:|
|ResNet-18|[66.13](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps0.5.ckpt) | [63.46](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps1.0.ckpt) | [59.63](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps2.0.ckpt) | [52.49](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps4.0.ckpt) | [42.11](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet18_linf_eps8.0.ckpt)
ResNet-50 |[73.73](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps0.5.ckpt) | [72.05](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps1.0.ckpt) | [69.10](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps2.0.ckpt) | [63.86](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps4.0.ckpt) | [54.53](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/resnet50_linf_eps8.0.ckpt)
Wide-ResNet-50-2 |[75.82](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps0.5.ckpt) | [74.65](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps1.0.ckpt) | [72.35](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps2.0.ckpt) | [68.41](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps4.0.ckpt) | [60.82](https://huggingface.co/madrylab/robust-imagenet-models/resolve/main/wide_resnet50_2_linf_eps8.0.ckpt)
|
nullland1027/epicRealism-pureEvolution-v3
|
nullland1027
| 2023-07-07T06:27:31Z | 0 | 0 | null |
[
"text-to-image",
"en",
"license:openrail",
"region:us"
] |
text-to-image
| 2023-07-07T06:26:24Z |
---
license: openrail
language:
- en
pipeline_tag: text-to-image
---
|
aroot/eng-guj-simcse_longest_usrb
|
aroot
| 2023-07-07T06:21:26Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:59:25Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2361
- Bleu: 2.8995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Qt_15_BERT_More_Properties
|
YakovElm
| 2023-07-07T06:19:38Z | 66 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T06:19:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2332
- Train Accuracy: 0.9367
- Validation Loss: 0.1937
- Validation Accuracy: 0.9505
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2409 | 0.9367 | 0.2001 | 0.9505 | 0 |
| 0.2357 | 0.9367 | 0.1992 | 0.9505 | 1 |
| 0.2332 | 0.9367 | 0.1937 | 0.9505 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longestplus_ssrb
|
aroot
| 2023-07-07T06:17:17Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:55:48Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2951
- Bleu: 2.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longestplus_usrb
|
aroot
| 2023-07-07T06:17:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:55:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2932
- Bleu: 2.5921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
squeeze-ai-lab/sq-opt-2.7b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T06:14:28Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:50:23Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 2.7B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 2.7B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
aroot/eng-guj-simcse_longest_ssrb
|
aroot
| 2023-07-07T06:13:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:51:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2274
- Bleu: 2.9771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pratikg123/falcon-7b
|
pratikg123
| 2023-07-07T06:08:08Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T06:06:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
linlinlin/ppo_model
|
linlinlin
| 2023-07-07T05:54:27Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-07T05:53:19Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="linlinlin//tmp/tmpvc7_208u/linlinlin/ppo_model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("linlinlin//tmp/tmpvc7_208u/linlinlin/ppo_model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("linlinlin//tmp/tmpvc7_208u/linlinlin/ppo_model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
squeeze-ai-lab/sq-opt-1.3b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T05:53:53Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:46:54Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 1.3B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 1.3B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
squeeze-ai-lab/sq-opt-30b-w4-s50
|
squeeze-ai-lab
| 2023-07-07T05:53:48Z | 0 | 0 | null |
[
"arxiv:2306.07629",
"arxiv:2205.01068",
"region:us"
] | null | 2023-07-07T05:53:10Z |
**SqueezeLLM** is a post-training quantization framework that incorporates a new method called Dense-and-Sparse Quantization to enable efficient LLM serving.
**TLDR:** Deploying LLMs is difficult due to their large memory size. This can be addressed with reduced precision quantization.
But a naive method hurts performance. We address this with a new Dense-and-Sparse Quantization method.
Dense-and-Sparse splits weight matrices into two components: A dense component that can be heavily quantized without affecting model performance,
as well as a sparse part that preserves sensitive and outlier parts of the weight matrices With this approach,
we are able to serve larger models with smaller memory footprint, the same latency, and yet higher accuracy and quality.
For more details please check out our [paper](https://arxiv.org/pdf/2306.07629.pdf).
## Model description
4-bit quantized OPT 30B model using SqueezeLLM. More details can be found in the [paper](https://arxiv.org/pdf/2306.07629.pdf).
* **Base Model:** [OPT 30B](https://arxiv.org/abs/2205.01068)
* **Bitwidth:** 4-bit
* **Sparsity Level:** 0.5%
## Links
* **Paper**: [https://arxiv.org/pdf/2306.07629.pdf](https://arxiv.org/pdf/2306.07629.pdf)
* **Code**: [https://github.com/SqueezeAILab/SqueezeLLM](https://github.com/SqueezeAILab/SqueezeLLM)
---
license: other
---
|
aroot/eng-fra-simcse_longest_usrb
|
aroot
| 2023-07-07T05:45:42Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:26:29Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1288
- Bleu: 32.5247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
remshu-inc/mmark
|
remshu-inc
| 2023-07-07T05:43:28Z | 3 | 0 |
tf-keras
|
[
"tf-keras",
"license:mit",
"region:us"
] | null | 2023-07-05T11:06:24Z |
---
license: mit
---
Модель предназначена для решения задачи определения оценки за ученический текст на немецком языке.
Модель представляет собой полносвязную нейронную сеть с 10-ю входными нейронами, 25 нейронами в первом скрытом слое, 11 нейронами во втором скрытом слое, 4 – нейронами в третьем скрытом слое, 1 нейроном в выходном слое.
На вход поступают нормированные на количество токенов в тексте значения:
* количество грамматических ошибок в тексте;
* количество лексических ошибок в тексте;
* количество синтаксических ошибок в тексте;
* количество орфографических ошибок в тексте;
* количество дискурсивных ошибок в тексте;
* количество пропусков слов в тексте;
* количество лишних слов в тексте;
* количество ошибок с уровнем грубости 1;
* количество ошибок с уровнем грубости 2;
* количество ошибок с уровнем грубости 3.
На выходе модель выдает значение оценки за текст по 12-балльной шкале. 1 соответствует минимальной оценке, 12 -- максимальной.
Для работы с моделью рекомендуется использовать библиотеку [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools).
|
remshu-inc/mencoder
|
remshu-inc
| 2023-07-07T05:42:25Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T10:42:07Z |
---
license: mit
---
Модель предназначена для решения задачи определения грубости ошибки в предложении ученического текста на немецком языке.
Модель была получена в результате дообучения модели «[dbmdz/convbert-base-german-europeana-cased](https://huggingface.co/dbmdz/convbert-base-german-europeana-cased)» на данных корпуса [ПАКТ](https://pact.ai.petrsu.ru/app).
На вход модели поступают два предложения на немецком языке. Первое предложение с ошибкой, второе -- с исправленной ошибкой. Модель выдает значение близости двух предложений. Если выданное значение близко к 0,98, то считается, что ошибка не влияет на понимание смысла предложения (грубость уровня 1), если выданное значение близко к 0,93, то считается, что ошибка ухудшает понимание смысла предложения (грубость уровня 2), если выданное значение близко к 0,87, то считается, что смысл предложения непонятен или искажен (грубость уровня 3).
Для работы с моделью рекомендуется использовать библиотеку [remshu-inc/pakt-work-tools](https://github.com/remshu-inc/pakt-work-tools).
|
aroot/eng-fra-simcse_longest_ssrb
|
aroot
| 2023-07-07T05:37:44Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T05:19:00Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1262
- Bleu: 32.1631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
happyduck/alcafa_5.8b_8000
|
happyduck
| 2023-07-07T05:26:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T05:26:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Shularp/Helsinki_en-mul_test_01
|
Shularp
| 2023-07-07T05:12:24Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T03:13:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: Helsinki_en-mul_test_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Helsinki_en-mul_test_01
This model is a fine-tuned version of [Shularp/Helsinki_en-mul_test](https://huggingface.co/Shularp/Helsinki_en-mul_test) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2373 | 1.0 | 4777 | 1.1392 |
| 1.1799 | 2.0 | 9554 | 1.0504 |
| 0.984 | 3.0 | 14331 | 1.0276 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dennischui/Reinforce-CartPole-v1
|
dennischui
| 2023-07-07T05:02:59Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T03:24:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 410.60 +/- 178.84
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TeaTM/DialoGPT-small-bushcat
|
TeaTM
| 2023-07-07T04:52:37Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"DialoGPT",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-19T22:25:09Z |
---
tags:
- conversational
- DialoGPT
language:
- en
---
# Bushcat DialoGPT-small Model
A smaller personified DialoGPT fork for a side project. Conversational for an entertainment chatbot.
Smaller model based on DialoGPT-small. Recommended to use the **TeaTM/DialoGPT-large-bushcat** model on my Hugging Face page.
The large model is bigger in size but also significantly smarter.
The character plays the persona of a cat in a bush that is overly positive. Just for fun. Has high perplexity, be warned.
Works great in Transformers & PyTorch.
# NOTE: This model is no longer being updated. There are better models and frameworks for custom, smarter characters.
# This is mostly "for fun" and is fairly lightweight compared to larger models. Good for small test projects.
|
nomsgadded/textual_inversion
|
nomsgadded
| 2023-07-07T04:31:51Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T03:42:52Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - nomsgadded/textual_inversion
These are textual inversion adaption weights for CompVis/stable-diffusion-v1-4. You can find some example images in the following.
|
YakovElm/Qt_5_BERT_More_Properties
|
YakovElm
| 2023-07-07T04:28:37Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T04:27:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_5_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_5_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3382
- Train Accuracy: 0.8943
- Validation Loss: 0.2633
- Validation Accuracy: 0.9294
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3488 | 0.8862 | 0.2583 | 0.9294 | 0 |
| 0.3401 | 0.8943 | 0.2680 | 0.9294 | 1 |
| 0.3382 | 0.8943 | 0.2633 | 0.9294 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longest_usblu
|
aroot
| 2023-07-07T04:27:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T04:05:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2266
- Bleu: 2.8360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longestplus_ssblu
|
aroot
| 2023-07-07T04:23:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T04:01:39Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2846
- Bleu: 2.6912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longest_ssblu
|
aroot
| 2023-07-07T04:23:29Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T04:01:17Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longest_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longest_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2244
- Bleu: 2.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
l3cube-pune/MarathiSentiment
|
l3cube-pune
| 2023-07-07T04:01:02Z | 118 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"text-classification",
"mr",
"dataset:L3CubeMahaSent",
"arxiv:2103.11408",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: mr
tags:
- albert
license: cc-by-4.0
datasets:
- L3CubeMahaSent
widget:
- text: "I like you. </s></s> I love you."
---
## MarathiSentiment
** An updated and better version of this model covering multiple domains is shared here: <a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md"> marathi-sentiment-md </a> ** <br>
MarathiSentiment is an IndicBERT(ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.
[dataset link] (https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper] (http://arxiv.org/abs/2103.11408)
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
```
|
aroot/eng-fra-simcse_longestplus_usblu
|
aroot
| 2023-07-07T03:51:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T03:32:12Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1299
- Bleu: 32.3457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chasmiccoder/ppo-LunarLander-v2
|
chasmiccoder
| 2023-07-07T03:47:17Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T03:46:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.82 +/- 17.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jordyvl/dit-base_tobacco
|
jordyvl
| 2023-07-07T03:42:16Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T13:14:21Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base_tobacco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base_tobacco
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3120
- Accuracy: 0.95
- Brier Loss: 0.0965
- Nll: 0.6372
- F1 Micro: 0.9500
- F1 Macro: 0.9545
- Ece: 0.0560
- Aurc: 0.0092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 6 | 2.4454 | 0.175 | 0.9193 | 8.6626 | 0.175 | 0.0676 | 0.2489 | 0.8592 |
| No log | 1.96 | 12 | 2.3287 | 0.175 | 0.9034 | 7.2049 | 0.175 | 0.0674 | 0.2590 | 0.8557 |
| No log | 2.96 | 18 | 2.0836 | 0.23 | 0.8528 | 3.3114 | 0.23 | 0.1544 | 0.2652 | 0.7357 |
| No log | 3.96 | 24 | 2.0456 | 0.315 | 0.8435 | 3.8932 | 0.315 | 0.1785 | 0.3010 | 0.6372 |
| No log | 4.96 | 30 | 1.8778 | 0.3 | 0.7820 | 3.0975 | 0.3 | 0.1659 | 0.2985 | 0.5174 |
| No log | 5.96 | 36 | 1.7247 | 0.365 | 0.7305 | 2.7808 | 0.3650 | 0.2235 | 0.2507 | 0.4036 |
| No log | 6.96 | 42 | 1.6610 | 0.38 | 0.7183 | 2.6958 | 0.38 | 0.2449 | 0.2538 | 0.4119 |
| No log | 7.96 | 48 | 1.4667 | 0.505 | 0.6417 | 2.4078 | 0.505 | 0.3653 | 0.2881 | 0.2656 |
| No log | 8.96 | 54 | 1.3427 | 0.58 | 0.6031 | 2.0381 | 0.58 | 0.5304 | 0.2885 | 0.2470 |
| No log | 9.96 | 60 | 1.1586 | 0.635 | 0.5217 | 1.8792 | 0.635 | 0.5496 | 0.2831 | 0.1697 |
| No log | 10.96 | 66 | 1.0108 | 0.71 | 0.4578 | 1.6886 | 0.7100 | 0.6273 | 0.2851 | 0.1340 |
| No log | 11.96 | 72 | 0.8648 | 0.75 | 0.3849 | 1.5408 | 0.75 | 0.6788 | 0.2530 | 0.0801 |
| No log | 12.96 | 78 | 0.7342 | 0.79 | 0.3327 | 1.3588 | 0.79 | 0.7264 | 0.2152 | 0.0575 |
| No log | 13.96 | 84 | 0.6024 | 0.835 | 0.2734 | 1.2694 | 0.835 | 0.7937 | 0.1876 | 0.0429 |
| No log | 14.96 | 90 | 0.5143 | 0.85 | 0.2386 | 1.1756 | 0.85 | 0.8175 | 0.1714 | 0.0363 |
| No log | 15.96 | 96 | 0.4429 | 0.865 | 0.2044 | 1.1080 | 0.865 | 0.8435 | 0.1380 | 0.0277 |
| No log | 16.96 | 102 | 0.3999 | 0.885 | 0.1854 | 1.0748 | 0.885 | 0.8673 | 0.1407 | 0.0274 |
| No log | 17.96 | 108 | 0.3635 | 0.88 | 0.1732 | 1.0361 | 0.88 | 0.8594 | 0.1117 | 0.0247 |
| No log | 18.96 | 114 | 0.3166 | 0.89 | 0.1454 | 1.0855 | 0.89 | 0.8682 | 0.0971 | 0.0196 |
| No log | 19.96 | 120 | 0.3137 | 0.905 | 0.1418 | 1.1614 | 0.905 | 0.8934 | 0.1041 | 0.0195 |
| No log | 20.96 | 126 | 0.3207 | 0.91 | 0.1408 | 1.1941 | 0.91 | 0.9002 | 0.0856 | 0.0198 |
| No log | 21.96 | 132 | 0.2753 | 0.925 | 0.1224 | 1.0928 | 0.925 | 0.9209 | 0.0858 | 0.0145 |
| No log | 22.96 | 138 | 0.2538 | 0.925 | 0.1169 | 1.0895 | 0.925 | 0.9187 | 0.0863 | 0.0111 |
| No log | 23.96 | 144 | 0.2691 | 0.935 | 0.1138 | 1.0767 | 0.935 | 0.9279 | 0.0730 | 0.0149 |
| No log | 24.96 | 150 | 0.2775 | 0.935 | 0.1131 | 1.0538 | 0.935 | 0.9292 | 0.0676 | 0.0157 |
| No log | 25.96 | 156 | 0.2544 | 0.94 | 0.1011 | 1.0266 | 0.94 | 0.9292 | 0.0643 | 0.0131 |
| No log | 26.96 | 162 | 0.2637 | 0.945 | 0.1013 | 1.0337 | 0.945 | 0.9384 | 0.0648 | 0.0150 |
| No log | 27.96 | 168 | 0.2787 | 0.94 | 0.1089 | 1.0202 | 0.94 | 0.9348 | 0.0685 | 0.0161 |
| No log | 28.96 | 174 | 0.2794 | 0.935 | 0.1091 | 1.0099 | 0.935 | 0.9306 | 0.0671 | 0.0143 |
| No log | 29.96 | 180 | 0.2631 | 0.935 | 0.1025 | 0.9815 | 0.935 | 0.9306 | 0.0575 | 0.0129 |
| No log | 30.96 | 186 | 0.2616 | 0.945 | 0.1009 | 0.9683 | 0.945 | 0.9401 | 0.0674 | 0.0120 |
| No log | 31.96 | 192 | 0.2726 | 0.935 | 0.1074 | 0.9598 | 0.935 | 0.9346 | 0.0641 | 0.0100 |
| No log | 32.96 | 198 | 0.2765 | 0.935 | 0.1058 | 0.9067 | 0.935 | 0.9321 | 0.0696 | 0.0101 |
| No log | 33.96 | 204 | 0.2662 | 0.95 | 0.0965 | 0.8891 | 0.9500 | 0.9522 | 0.0672 | 0.0120 |
| No log | 34.96 | 210 | 0.2761 | 0.935 | 0.1019 | 0.8893 | 0.935 | 0.9338 | 0.0597 | 0.0134 |
| No log | 35.96 | 216 | 0.2729 | 0.945 | 0.0961 | 0.8807 | 0.945 | 0.9419 | 0.0552 | 0.0119 |
| No log | 36.96 | 222 | 0.2741 | 0.94 | 0.1037 | 0.8782 | 0.94 | 0.9356 | 0.0645 | 0.0086 |
| No log | 37.96 | 228 | 0.2686 | 0.94 | 0.0994 | 0.8423 | 0.94 | 0.9356 | 0.0592 | 0.0085 |
| No log | 38.96 | 234 | 0.2712 | 0.95 | 0.0906 | 0.8179 | 0.9500 | 0.9545 | 0.0610 | 0.0105 |
| No log | 39.96 | 240 | 0.2644 | 0.95 | 0.0870 | 0.8240 | 0.9500 | 0.9443 | 0.0510 | 0.0110 |
| No log | 40.96 | 246 | 0.2653 | 0.95 | 0.0932 | 0.8386 | 0.9500 | 0.9525 | 0.0572 | 0.0118 |
| No log | 41.96 | 252 | 0.2724 | 0.955 | 0.0939 | 0.8369 | 0.955 | 0.9573 | 0.0602 | 0.0104 |
| No log | 42.96 | 258 | 0.2552 | 0.95 | 0.0868 | 0.8079 | 0.9500 | 0.9522 | 0.0539 | 0.0079 |
| No log | 43.96 | 264 | 0.2629 | 0.95 | 0.0879 | 0.7800 | 0.9500 | 0.9545 | 0.0526 | 0.0080 |
| No log | 44.96 | 270 | 0.2664 | 0.955 | 0.0864 | 0.7660 | 0.955 | 0.9575 | 0.0515 | 0.0086 |
| No log | 45.96 | 276 | 0.2777 | 0.945 | 0.0948 | 0.7670 | 0.945 | 0.9513 | 0.0524 | 0.0096 |
| No log | 46.96 | 282 | 0.2824 | 0.94 | 0.1014 | 0.7799 | 0.94 | 0.9436 | 0.0570 | 0.0093 |
| No log | 47.96 | 288 | 0.2699 | 0.95 | 0.0896 | 0.7706 | 0.9500 | 0.9546 | 0.0528 | 0.0087 |
| No log | 48.96 | 294 | 0.2809 | 0.945 | 0.0950 | 0.7691 | 0.945 | 0.9480 | 0.0475 | 0.0087 |
| No log | 49.96 | 300 | 0.2827 | 0.945 | 0.0940 | 0.7635 | 0.945 | 0.9447 | 0.0571 | 0.0091 |
| No log | 50.96 | 306 | 0.2781 | 0.945 | 0.0921 | 0.7591 | 0.945 | 0.9478 | 0.0552 | 0.0090 |
| No log | 51.96 | 312 | 0.2834 | 0.95 | 0.0946 | 0.7572 | 0.9500 | 0.9484 | 0.0549 | 0.0089 |
| No log | 52.96 | 318 | 0.2986 | 0.94 | 0.0994 | 0.7541 | 0.94 | 0.9363 | 0.0605 | 0.0091 |
| No log | 53.96 | 324 | 0.2957 | 0.94 | 0.1016 | 0.7447 | 0.94 | 0.9385 | 0.0562 | 0.0086 |
| No log | 54.96 | 330 | 0.2991 | 0.94 | 0.1047 | 0.7392 | 0.94 | 0.9377 | 0.0592 | 0.0102 |
| No log | 55.96 | 336 | 0.3027 | 0.94 | 0.1031 | 0.7235 | 0.94 | 0.9377 | 0.0572 | 0.0113 |
| No log | 56.96 | 342 | 0.2945 | 0.945 | 0.0968 | 0.7143 | 0.945 | 0.9470 | 0.0581 | 0.0104 |
| No log | 57.96 | 348 | 0.2935 | 0.94 | 0.0955 | 0.7046 | 0.94 | 0.9459 | 0.0569 | 0.0097 |
| No log | 58.96 | 354 | 0.2909 | 0.94 | 0.0934 | 0.6969 | 0.94 | 0.9459 | 0.0544 | 0.0092 |
| No log | 59.96 | 360 | 0.2973 | 0.95 | 0.0939 | 0.6964 | 0.9500 | 0.9545 | 0.0524 | 0.0082 |
| No log | 60.96 | 366 | 0.3222 | 0.93 | 0.1108 | 0.7078 | 0.93 | 0.9266 | 0.0586 | 0.0088 |
| No log | 61.96 | 372 | 0.3247 | 0.935 | 0.1093 | 0.7743 | 0.935 | 0.9353 | 0.0622 | 0.0091 |
| No log | 62.96 | 378 | 0.3125 | 0.945 | 0.1003 | 0.7651 | 0.945 | 0.9453 | 0.0559 | 0.0089 |
| No log | 63.96 | 384 | 0.3035 | 0.945 | 0.0993 | 0.7515 | 0.945 | 0.9476 | 0.0545 | 0.0088 |
| No log | 64.96 | 390 | 0.3002 | 0.945 | 0.0973 | 0.7408 | 0.945 | 0.9476 | 0.0537 | 0.0091 |
| No log | 65.96 | 396 | 0.3023 | 0.95 | 0.0965 | 0.7321 | 0.9500 | 0.9545 | 0.0523 | 0.0095 |
| No log | 66.96 | 402 | 0.3075 | 0.945 | 0.1007 | 0.7323 | 0.945 | 0.9477 | 0.0540 | 0.0096 |
| No log | 67.96 | 408 | 0.3062 | 0.945 | 0.0999 | 0.6682 | 0.945 | 0.9514 | 0.0525 | 0.0098 |
| No log | 68.96 | 414 | 0.3182 | 0.945 | 0.0968 | 0.6809 | 0.945 | 0.9432 | 0.0485 | 0.0115 |
| No log | 69.96 | 420 | 0.3272 | 0.945 | 0.0972 | 0.6879 | 0.945 | 0.9432 | 0.0513 | 0.0132 |
| No log | 70.96 | 426 | 0.3210 | 0.945 | 0.0973 | 0.7545 | 0.945 | 0.9488 | 0.0522 | 0.0124 |
| No log | 71.96 | 432 | 0.3194 | 0.945 | 0.1027 | 0.7464 | 0.945 | 0.9514 | 0.0546 | 0.0108 |
| No log | 72.96 | 438 | 0.3236 | 0.94 | 0.1067 | 0.7486 | 0.94 | 0.9427 | 0.0587 | 0.0097 |
| No log | 73.96 | 444 | 0.3166 | 0.94 | 0.1049 | 0.6751 | 0.94 | 0.9427 | 0.0597 | 0.0096 |
| No log | 74.96 | 450 | 0.3062 | 0.945 | 0.0982 | 0.6702 | 0.945 | 0.9514 | 0.0526 | 0.0100 |
| No log | 75.96 | 456 | 0.3018 | 0.95 | 0.0948 | 0.6823 | 0.9500 | 0.9545 | 0.0523 | 0.0102 |
| No log | 76.96 | 462 | 0.3062 | 0.95 | 0.0951 | 0.7444 | 0.9500 | 0.9545 | 0.0522 | 0.0109 |
| No log | 77.96 | 468 | 0.3072 | 0.95 | 0.0933 | 0.7437 | 0.9500 | 0.9545 | 0.0501 | 0.0118 |
| No log | 78.96 | 474 | 0.3095 | 0.95 | 0.0943 | 0.6749 | 0.9500 | 0.9545 | 0.0512 | 0.0121 |
| No log | 79.96 | 480 | 0.3097 | 0.945 | 0.0968 | 0.6654 | 0.945 | 0.9514 | 0.0576 | 0.0116 |
| No log | 80.96 | 486 | 0.3094 | 0.95 | 0.0967 | 0.6581 | 0.9500 | 0.9545 | 0.0526 | 0.0112 |
| No log | 81.96 | 492 | 0.3109 | 0.95 | 0.0954 | 0.6549 | 0.9500 | 0.9545 | 0.0507 | 0.0115 |
| No log | 82.96 | 498 | 0.3104 | 0.95 | 0.0949 | 0.7168 | 0.9500 | 0.9545 | 0.0521 | 0.0113 |
| 0.3747 | 83.96 | 504 | 0.3122 | 0.95 | 0.0949 | 0.7130 | 0.9500 | 0.9545 | 0.0513 | 0.0111 |
| 0.3747 | 84.96 | 510 | 0.3140 | 0.95 | 0.0944 | 0.7116 | 0.9500 | 0.9545 | 0.0534 | 0.0113 |
| 0.3747 | 85.96 | 516 | 0.3175 | 0.95 | 0.0949 | 0.7100 | 0.9500 | 0.9545 | 0.0544 | 0.0113 |
| 0.3747 | 86.96 | 522 | 0.3187 | 0.95 | 0.0958 | 0.7072 | 0.9500 | 0.9545 | 0.0537 | 0.0111 |
| 0.3747 | 87.96 | 528 | 0.3191 | 0.95 | 0.0967 | 0.6428 | 0.9500 | 0.9545 | 0.0536 | 0.0103 |
| 0.3747 | 88.96 | 534 | 0.3168 | 0.95 | 0.0963 | 0.6438 | 0.9500 | 0.9545 | 0.0542 | 0.0102 |
| 0.3747 | 89.96 | 540 | 0.3136 | 0.95 | 0.0963 | 0.6418 | 0.9500 | 0.9545 | 0.0554 | 0.0099 |
| 0.3747 | 90.96 | 546 | 0.3117 | 0.95 | 0.0963 | 0.6407 | 0.9500 | 0.9545 | 0.0533 | 0.0097 |
| 0.3747 | 91.96 | 552 | 0.3113 | 0.95 | 0.0964 | 0.6403 | 0.9500 | 0.9545 | 0.0528 | 0.0091 |
| 0.3747 | 92.96 | 558 | 0.3112 | 0.95 | 0.0968 | 0.6401 | 0.9500 | 0.9545 | 0.0517 | 0.0091 |
| 0.3747 | 93.96 | 564 | 0.3109 | 0.95 | 0.0967 | 0.6393 | 0.9500 | 0.9545 | 0.0563 | 0.0091 |
| 0.3747 | 94.96 | 570 | 0.3112 | 0.95 | 0.0969 | 0.6370 | 0.9500 | 0.9545 | 0.0567 | 0.0092 |
| 0.3747 | 95.96 | 576 | 0.3118 | 0.95 | 0.0971 | 0.6364 | 0.9500 | 0.9545 | 0.0568 | 0.0091 |
| 0.3747 | 96.96 | 582 | 0.3120 | 0.95 | 0.0969 | 0.6377 | 0.9500 | 0.9545 | 0.0564 | 0.0092 |
| 0.3747 | 97.96 | 588 | 0.3121 | 0.95 | 0.0966 | 0.6379 | 0.9500 | 0.9545 | 0.0560 | 0.0092 |
| 0.3747 | 98.96 | 594 | 0.3121 | 0.95 | 0.0965 | 0.6374 | 0.9500 | 0.9545 | 0.0560 | 0.0092 |
| 0.3747 | 99.96 | 600 | 0.3120 | 0.95 | 0.0965 | 0.6372 | 0.9500 | 0.9545 | 0.0560 | 0.0092 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
biodatlab/aug-whisper-medium-th-base
|
biodatlab
| 2023-07-07T03:41:34Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"th",
"dataset:mozilla-foundation/common_voice_13_0",
"dataset:google/fleurs",
"dataset:biodatlab/whisper-th-custom",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-07T02:05:37Z |
---
license: apache-2.0
datasets:
- mozilla-foundation/common_voice_13_0
- google/fleurs
- biodatlab/whisper-th-custom
language:
- th
metrics:
- wer
library_name: transformers
---
|
Twitter/twhin-bert-base
|
Twitter
| 2023-07-07T03:38:25Z | 3,304 | 35 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"Twitter",
"Multilingual",
"en",
"ja",
"pt",
"es",
"ko",
"ar",
"tr",
"th",
"fr",
"id",
"ru",
"de",
"fa",
"it",
"zh",
"pl",
"hi",
"ur",
"nl",
"el",
"ms",
"ca",
"sr",
"sv",
"uk",
"he",
"fi",
"cs",
"ta",
"ne",
"vi",
"hu",
"eo",
"bn",
"mr",
"ml",
"hr",
"no",
"sw",
"sl",
"te",
"az",
"da",
"ro",
"gl",
"gu",
"ps",
"mk",
"kn",
"bg",
"lv",
"eu",
"pa",
"et",
"mn",
"sq",
"si",
"sd",
"la",
"is",
"jv",
"lt",
"ku",
"am",
"bs",
"hy",
"or",
"sk",
"uz",
"cy",
"my",
"su",
"br",
"as",
"af",
"be",
"fy",
"kk",
"ga",
"lo",
"ka",
"km",
"sa",
"mg",
"so",
"ug",
"ky",
"gd",
"yi",
"arxiv:2209.07562",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-18T18:34:23Z |
---
language:
- en
- ja
- pt
- es
- ko
- ar
- tr
- th
- fr
- id
- ru
- de
- fa
- it
- zh
- pl
- hi
- ur
- nl
- el
- ms
- ca
- sr
- sv
- uk
- he
- fi
- cs
- ta
- ne
- vi
- hu
- eo
- bn
- mr
- ml
- hr
- no
- sw
- sl
- te
- az
- da
- ro
- gl
- gu
- ps
- mk
- kn
- bg
- lv
- eu
- pa
- et
- mn
- sq
- si
- sd
- la
- is
- jv
- lt
- ku
- am
- bs
- hy
- or
- sk
- uz
- cy
- my
- su
- br
- as
- af
- be
- fy
- kk
- ga
- lo
- ka
- km
- sa
- mg
- so
- ug
- ky
- gd
- yi
tags:
- Twitter
- Multilingual
license: "apache-2.0"
mask_token: "<mask>"
---
# TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2209.07562)
This repo contains models, code and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562).
[[PDF]](https://arxiv.org/pdf/2209.07562.pdf)
[[HuggingFace Models]](https://huggingface.co/Twitter)
### Overview
TwHIN-BERT is a new multi-lingual Tweet language model that is trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models as it is trained with not only text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN).
TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models semantic understanding tasks such text classification), but also **social recommendation** tasks such as predicting user to Tweet engagement.
## 1. Pretrained Models
We initially release two pretrained TwHIN-BERT models (base and large) that are compatible wit the [HuggingFace BERT models](https://github.com/huggingface/transformers).
| Model | Size | Download Link (🤗 HuggingFace) |
| ------------- | ------------- | --------- |
| TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) |
| TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) |
To use these models in 🤗 Transformers:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModel.from_pretrained('Twitter/twhin-bert-base')
inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt")
outputs = model(**inputs)
```
<!-- ## 2. Set up environment and data
### Environment
TBD
## 3. Fine-tune TwHIN-BERT
TBD -->
## Citation
If you use TwHIN-BERT or out datasets in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
```
|
AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
|
AustinCarthy
| 2023-07-07T03:33:18Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T01:19:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_domain_100KP_BFall_fromP_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
- Accuracy: 0.9971
- F1: 0.9693
- Precision: 0.9939
- Recall: 0.9458
- Roc Auc Score: 0.9728
- Tpr At Fpr 0.01: 0.9312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0153 | 1.0 | 21554 | 0.0251 | 0.9950 | 0.9443 | 0.9980 | 0.896 | 0.9480 | 0.8982 |
| 0.0084 | 2.0 | 43108 | 0.0175 | 0.9970 | 0.9675 | 0.9914 | 0.9448 | 0.9722 | 0.9184 |
| 0.0041 | 3.0 | 64662 | 0.0135 | 0.9975 | 0.9737 | 0.9873 | 0.9606 | 0.9800 | 0.904 |
| 0.0013 | 4.0 | 86216 | 0.0210 | 0.9969 | 0.9668 | 0.9922 | 0.9426 | 0.9711 | 0.9174 |
| 0.0015 | 5.0 | 107770 | 0.0248 | 0.9971 | 0.9693 | 0.9939 | 0.9458 | 0.9728 | 0.9312 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nesanchezo/model_prueba
|
nesanchezo
| 2023-07-07T03:28:51Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T20:13:39Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: model_prueba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_prueba
This model is a fine-tuned version of [farleyknight-org-username/vit-base-mnist](https://huggingface.co/farleyknight-org-username/vit-base-mnist) on the handwriten-Numbers dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1889
- Accuracy: 0.9606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Aeala/Enterredaas-33b-4bit
|
Aeala
| 2023-07-07T03:28:32Z | 9 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T00:18:50Z |
4-bit GPTQ quantization of [Enterredaas-33b](https://huggingface.co/Aeala/Enterredaas-33b-QLoRA)
**Important Note**: This was trained in the *Alpaca* format, so prompting should be something like:
```
### Instruction:
<system prompt> (without the <>, this works like telling the AI what it is/purpose. i.e. like ChatGPT API's system prompt)
### Input:
<prompt> (without the <>)
### Response:
```
|
PixelPerfect/PixelPerfect
|
PixelPerfect
| 2023-07-07T03:24:19Z | 31 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-23T18:38:48Z |
PixelPerfect Text-to-Image Model!
|
Roy029/mt5_extend_5000
|
Roy029
| 2023-07-07T03:16:08Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-03T11:29:57Z |
未学習
5000語Pythonを拡張
|
aroot/eng-mya-simcse_longestplus_usbbu
|
aroot
| 2023-07-07T03:09:49Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T02:48:19Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longestplus_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8896
- Bleu: 4.1199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_longestplus_ssbbu
|
aroot
| 2023-07-07T03:08:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T02:47:56Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_longestplus_ssbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_longestplus_ssbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8853
- Bleu: 4.0560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-guj-simcse_longestplus_usbbu
|
aroot
| 2023-07-07T02:27:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T02:09:52Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_longestplus_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_longestplus_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2984
- Bleu: 2.6234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lazygeek007/test
|
lazygeek007
| 2023-07-07T02:15:09Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-07T02:14:45Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YakovElm/Jira_15_BERT_More_Properties
|
YakovElm
| 2023-07-07T02:00:29Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T01:59:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_15_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_15_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3339
- Train Accuracy: 0.8541
- Validation Loss: 0.4171
- Validation Accuracy: 0.8265
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4960 | 0.7933 | 0.7700 | 0.6120 | 0 |
| 0.3927 | 0.8206 | 0.4151 | 0.8707 | 1 |
| 0.3339 | 0.8541 | 0.4171 | 0.8265 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longest_ssbbu
|
aroot
| 2023-07-07T01:56:07Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T01:40:31Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_ssbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_ssbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1292
- Bleu: 32.3788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longest_usbbu
|
aroot
| 2023-07-07T01:56:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T01:40:36Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longest_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longest_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1308
- Bleu: 32.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_longestplus_usbbu
|
aroot
| 2023-07-07T01:55:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T01:40:35Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_longestplus_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_longestplus_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1385
- Bleu: 32.2838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Jira_10_BERT_More_Properties
|
YakovElm
| 2023-07-07T01:44:59Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T01:44:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_10_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_10_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3867
- Train Accuracy: 0.8248
- Validation Loss: 0.4420
- Validation Accuracy: 0.7760
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5406 | 0.7786 | 0.8600 | 0.4921 | 0 |
| 0.4617 | 0.7828 | 0.6068 | 0.4921 | 1 |
| 0.3867 | 0.8248 | 0.4420 | 0.7760 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
manosp/textual_inversion_moto_toy
|
manosp
| 2023-07-07T01:37:44Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-06T22:25:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - manosp/textual_inversion_moto_toy
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.




|
Yangdf/moss_lora_guanaco_belle-merge_r-32_lr_1e-4_1w-steps
|
Yangdf
| 2023-07-07T01:32:01Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T01:30:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
zbaorgbui/q-FrozenLake-v1-4x4-noSlippery
|
zbaorgbui
| 2023-07-07T01:30:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T01:30:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="zbaorgbui/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ayresflesch/ppo-Huggy
|
ayresflesch
| 2023-07-07T01:23:40Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-07T01:23:34Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ayresflesch/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
liyingjian/ppo-Huggy
|
liyingjian
| 2023-07-07T01:20:06Z | 42 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-07T01:19:55Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: liyingjian/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/IntelDAOS_20_BERT_More_Properties
|
YakovElm
| 2023-07-07T01:13:37Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T01:13:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_20_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_20_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1683
- Train Accuracy: 0.9610
- Validation Loss: 0.3381
- Validation Accuracy: 0.9099
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2173 | 0.9420 | 0.3231 | 0.9099 | 0 |
| 0.1645 | 0.9610 | 0.3424 | 0.9099 | 1 |
| 0.1683 | 0.9610 | 0.3381 | 0.9099 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Karum/Baji_keisuke
|
Karum
| 2023-07-07T01:13:30Z | 0 | 0 |
nemo
|
[
"nemo",
"audio-to-audio",
"es",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:mozilla-foundation/common_voice_11_0",
"license:creativeml-openrail-m",
"region:us"
] |
audio-to-audio
| 2023-07-07T01:11:06Z |
---
license: creativeml-openrail-m
datasets:
- fka/awesome-chatgpt-prompts
- mozilla-foundation/common_voice_11_0
language:
- es
- en
library_name: nemo
pipeline_tag: audio-to-audio
---
|
AustinCarthy/Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-07-07T01:11:37Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-06T21:45:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_suffix_100KP_BFall_fromB_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_benign_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Accuracy: 0.9881
- F1: 0.8818
- Precision: 0.8347
- Recall: 0.9344
- Roc Auc Score: 0.9626
- Tpr At Fpr 0.01: 0.7796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0915 | 1.0 | 35625 | 0.0478 | 0.9880 | 0.8761 | 0.8605 | 0.8922 | 0.9425 | 0.7968 |
| 0.0797 | 2.0 | 71250 | 0.0386 | 0.9897 | 0.8959 | 0.8654 | 0.9286 | 0.9607 | 0.8378 |
| 0.0622 | 3.0 | 106875 | 0.0459 | 0.9876 | 0.8760 | 0.8335 | 0.923 | 0.9569 | 0.77 |
| 0.0398 | 4.0 | 142500 | 0.0544 | 0.9882 | 0.8828 | 0.8370 | 0.9338 | 0.9624 | 0.8568 |
| 0.0298 | 5.0 | 178125 | 0.0623 | 0.9881 | 0.8818 | 0.8347 | 0.9344 | 0.9626 | 0.7796 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ
|
bhenrym14
| 2023-07-07T00:56:05Z | 7 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"arxiv:2306.15595",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T13:11:14Z |
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
# RoPE Scaled QLoRA Fine-tune of Llama-13b on airoboros-gpt4-1.4.1 (GPTQ)
LoRA Weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-LoRA
fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16
## Overview
This is [Jon Durbin's Airoboros 13B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) (merged model with GPTQ Quantization) with several key modifications:
- Context length extended to 8192 by RoPE Scaled Embeddings, but NOT via the superHOT LoRA. I started with base Llama-13b.
- Training sequences beyond 2048 have the target truncated to equal 2048.
- Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
- **This is a QLoRA fine-tune**. The original 13b model is a full fine-tune.
It was trained on 1x RTX 6000 Ada for ~17 hours.
## How to Use
The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set max_seq_len to 8192 and compress_pos_emb to 4.
If you wish to use AutoGPTQ/GPTQ-for-Llama instead, you'll need to patch in the appropriate RoPE scaling module. see: [replace_llama_rope_with_scaled_rope](https://github.com/bhenrym14/qlora-airoboros-longcontext/blob/main/scaledllama/llama_rope_scaled_monkey_patch.py)
## Motivation
Recent advancements in extending context by RoPE scaling ([kaiokendev](https://kaiokendev.github.io/til#extending-context-to-8k) and [meta AI)](https://arxiv.org/abs/2306.15595)) demonstrate the ability to extend the context window without (total) retraining. Finetuning has shown to be necessary to properly leverage the longer context. The superHOT LoRA is an adapter that has been fine-tuned on longer context (8192 tokens); even when applied to models trained on dissimilar datasets, it successfully extends the context window to which the model can attend. While it's impressive this adapter is so flexible, how much does performance suffer relative to a model that has been fine-tuned with the scaled embeddings from the start? This is an experiment to explore this.
## Relative Performance (perplexity)
| Model | Context (tokens) | Perplexity |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 512 | **7.42** |
| TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 512 | 8.86 |
| **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 512 | 7.94 |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 2048 | **5.02** |
| TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 2048 | 5.98 |
| **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 2048 | 5.28 |
| ---------------------------------------------------- | ----------- | ---------- |
| TheBloke/airoboros-13B-gpt4-1-4-GPTQ | 4096 | 9848.0 |
| TheBloke/airoboros-13B-gpt4-1-4-SuperHOT-8K-GPTQ | 4096 | 5.80 |
| **bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ** | 4096 | **5.15** |
- For contexts shorter than the original 2048, the original model has lower perplexity. This is consistent with the literature. The gap shrinks with context length, with the original becoming incoherent beyond this point.
- In terms of perplexity, this model outperforms the SuperHOT variant at all tested context lengths. I haven't used models with the SuperHOT LoRA enough to have any sense of performance differences, but feedback on the 33b variant suggests it is particularly noticable at longer context lengths.
- This comparison isn't perfect. I did use the 1.4.1 dataset, the quantization method is slightly different, and the finetuning method is different (QLoRA vs full). In short, there are other potentially influential variables responsible for these performance differences.
This model could be a little undertrained. I'll update the weights if I end up training it longer and/or with better hyperparameters
## Quantization:
The merged model was quantized with AutoGPTQ (bits = 4, group_size = 128, desc_act = True).
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 13B GPT4 1.4
## Overview
This is a __full__ (not qlora) fine-tune 13b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jjhonny/Reinforce-PixelCopter
|
jjhonny
| 2023-07-07T00:41:43Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-02T00:06:48Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.50 +/- 13.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YakovElm/IntelDAOS_5_BERT_More_Properties
|
YakovElm
| 2023-07-07T00:25:31Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T00:24:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_5_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_5_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3796
- Train Accuracy: 0.8740
- Validation Loss: 0.4317
- Validation Accuracy: 0.8438
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4019 | 0.8550 | 0.4347 | 0.8438 | 0 |
| 0.3829 | 0.8740 | 0.4341 | 0.8438 | 1 |
| 0.3796 | 0.8740 | 0.4317 | 0.8438 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Raizel123/TAYHlora
|
Raizel123
| 2023-07-07T00:18:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T00:16:15Z |
---
license: creativeml-openrail-m
---
|
Raizel123/Nesyalora
|
Raizel123
| 2023-07-07T00:07:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T00:02:13Z |
---
license: creativeml-openrail-m
---
|
jordyvl/vit-base_tobacco
|
jordyvl
| 2023-07-06T23:51:02Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T13:12:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
- Accuracy: 0.815
- Brier Loss: 0.3076
- Nll: 1.1877
- F1 Micro: 0.815
- F1 Macro: 0.7942
- Ece: 0.2072
- Aurc: 0.0734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 6 | 2.3082 | 0.085 | 0.9012 | 6.2672 | 0.085 | 0.0735 | 0.1625 | 0.9316 |
| No log | 1.96 | 12 | 2.2872 | 0.14 | 0.8970 | 4.8533 | 0.14 | 0.0885 | 0.1958 | 0.8912 |
| No log | 2.96 | 18 | 2.2562 | 0.225 | 0.8906 | 4.5559 | 0.225 | 0.1319 | 0.2527 | 0.8101 |
| No log | 3.96 | 24 | 2.2107 | 0.265 | 0.8808 | 4.3151 | 0.265 | 0.1614 | 0.2710 | 0.6990 |
| No log | 4.96 | 30 | 2.1433 | 0.3 | 0.8654 | 4.1825 | 0.3 | 0.1615 | 0.2943 | 0.6102 |
| No log | 5.96 | 36 | 2.0764 | 0.325 | 0.8493 | 3.6715 | 0.325 | 0.1696 | 0.3160 | 0.4502 |
| No log | 6.96 | 42 | 2.0012 | 0.375 | 0.8287 | 3.5534 | 0.375 | 0.1901 | 0.3542 | 0.3791 |
| No log | 7.96 | 48 | 1.9197 | 0.41 | 0.8041 | 3.3582 | 0.41 | 0.2136 | 0.3528 | 0.3342 |
| No log | 8.96 | 54 | 1.8379 | 0.45 | 0.7767 | 3.1997 | 0.45 | 0.2279 | 0.3709 | 0.2872 |
| No log | 9.96 | 60 | 1.7538 | 0.535 | 0.7475 | 2.9586 | 0.535 | 0.3755 | 0.4024 | 0.2508 |
| No log | 10.96 | 66 | 1.6634 | 0.57 | 0.7132 | 2.6969 | 0.57 | 0.4025 | 0.4182 | 0.2183 |
| No log | 11.96 | 72 | 1.5952 | 0.61 | 0.6842 | 2.4519 | 0.61 | 0.4427 | 0.4153 | 0.1882 |
| No log | 12.96 | 78 | 1.5205 | 0.655 | 0.6554 | 1.9703 | 0.655 | 0.5306 | 0.4572 | 0.1651 |
| No log | 13.96 | 84 | 1.4566 | 0.67 | 0.6308 | 1.7832 | 0.67 | 0.5458 | 0.4240 | 0.1514 |
| No log | 14.96 | 90 | 1.4009 | 0.685 | 0.6074 | 1.8217 | 0.685 | 0.5641 | 0.4221 | 0.1406 |
| No log | 15.96 | 96 | 1.3520 | 0.7 | 0.5866 | 1.6223 | 0.7 | 0.5896 | 0.4107 | 0.1304 |
| No log | 16.96 | 102 | 1.3220 | 0.7 | 0.5741 | 1.4452 | 0.7 | 0.5865 | 0.4029 | 0.1225 |
| No log | 17.96 | 108 | 1.2764 | 0.705 | 0.5522 | 1.4534 | 0.705 | 0.6076 | 0.3805 | 0.1269 |
| No log | 18.96 | 114 | 1.2448 | 0.72 | 0.5378 | 1.4843 | 0.72 | 0.6321 | 0.3724 | 0.1193 |
| No log | 19.96 | 120 | 1.2049 | 0.74 | 0.5210 | 1.2527 | 0.74 | 0.6471 | 0.3947 | 0.1039 |
| No log | 20.96 | 126 | 1.1712 | 0.74 | 0.5057 | 1.1657 | 0.74 | 0.6464 | 0.3833 | 0.0955 |
| No log | 21.96 | 132 | 1.1453 | 0.735 | 0.4936 | 1.0277 | 0.735 | 0.6597 | 0.3628 | 0.1015 |
| No log | 22.96 | 138 | 1.1094 | 0.745 | 0.4771 | 1.0003 | 0.745 | 0.6667 | 0.3841 | 0.0938 |
| No log | 23.96 | 144 | 1.0803 | 0.75 | 0.4628 | 1.0334 | 0.75 | 0.6972 | 0.3490 | 0.0891 |
| No log | 24.96 | 150 | 1.0658 | 0.755 | 0.4559 | 1.0092 | 0.755 | 0.6937 | 0.3536 | 0.0925 |
| No log | 25.96 | 156 | 1.0345 | 0.765 | 0.4423 | 0.9971 | 0.765 | 0.7356 | 0.3661 | 0.0852 |
| No log | 26.96 | 162 | 1.0133 | 0.76 | 0.4323 | 0.9206 | 0.76 | 0.7302 | 0.3343 | 0.0791 |
| No log | 27.96 | 168 | 0.9927 | 0.775 | 0.4225 | 0.9015 | 0.775 | 0.7433 | 0.3457 | 0.0794 |
| No log | 28.96 | 174 | 0.9789 | 0.765 | 0.4152 | 0.8946 | 0.765 | 0.7282 | 0.3337 | 0.0818 |
| No log | 29.96 | 180 | 0.9509 | 0.78 | 0.4025 | 0.9323 | 0.78 | 0.7565 | 0.3135 | 0.0733 |
| No log | 30.96 | 186 | 0.9388 | 0.79 | 0.3968 | 0.8616 | 0.79 | 0.7642 | 0.3353 | 0.0694 |
| No log | 31.96 | 192 | 0.9316 | 0.78 | 0.3927 | 0.8636 | 0.78 | 0.7588 | 0.3426 | 0.0739 |
| No log | 32.96 | 198 | 0.9197 | 0.79 | 0.3876 | 0.8581 | 0.79 | 0.7656 | 0.3042 | 0.0800 |
| No log | 33.96 | 204 | 0.9020 | 0.775 | 0.3792 | 0.8458 | 0.775 | 0.7543 | 0.2872 | 0.0744 |
| No log | 34.96 | 210 | 0.8833 | 0.785 | 0.3694 | 0.8288 | 0.785 | 0.7619 | 0.3305 | 0.0663 |
| No log | 35.96 | 216 | 0.8684 | 0.795 | 0.3624 | 0.8462 | 0.795 | 0.7779 | 0.3184 | 0.0690 |
| No log | 36.96 | 222 | 0.8608 | 0.79 | 0.3584 | 0.8860 | 0.79 | 0.7707 | 0.2790 | 0.0709 |
| No log | 37.96 | 228 | 0.8586 | 0.79 | 0.3587 | 0.8954 | 0.79 | 0.7724 | 0.3153 | 0.0754 |
| No log | 38.96 | 234 | 0.8470 | 0.79 | 0.3515 | 0.8822 | 0.79 | 0.7684 | 0.3075 | 0.0726 |
| No log | 39.96 | 240 | 0.8288 | 0.79 | 0.3434 | 0.8192 | 0.79 | 0.7700 | 0.2700 | 0.0648 |
| No log | 40.96 | 246 | 0.8255 | 0.8 | 0.3426 | 0.8191 | 0.8000 | 0.7808 | 0.2760 | 0.0727 |
| No log | 41.96 | 252 | 0.8247 | 0.8 | 0.3411 | 0.8876 | 0.8000 | 0.7737 | 0.2903 | 0.0701 |
| No log | 42.96 | 258 | 0.8196 | 0.8 | 0.3389 | 0.8841 | 0.8000 | 0.7786 | 0.2768 | 0.0727 |
| No log | 43.96 | 264 | 0.8118 | 0.805 | 0.3351 | 0.9510 | 0.805 | 0.7806 | 0.2620 | 0.0685 |
| No log | 44.96 | 270 | 0.8127 | 0.795 | 0.3352 | 1.0119 | 0.795 | 0.7705 | 0.2650 | 0.0707 |
| No log | 45.96 | 276 | 0.7968 | 0.8 | 0.3285 | 1.0041 | 0.8000 | 0.7788 | 0.2734 | 0.0665 |
| No log | 46.96 | 282 | 0.7946 | 0.81 | 0.3274 | 1.0647 | 0.81 | 0.7921 | 0.2765 | 0.0703 |
| No log | 47.96 | 288 | 0.7996 | 0.805 | 0.3298 | 1.0108 | 0.805 | 0.7867 | 0.2772 | 0.0714 |
| No log | 48.96 | 294 | 0.7971 | 0.805 | 0.3283 | 1.0728 | 0.805 | 0.7816 | 0.2756 | 0.0732 |
| No log | 49.96 | 300 | 0.7950 | 0.8 | 0.3278 | 1.0694 | 0.8000 | 0.7758 | 0.2540 | 0.0750 |
| No log | 50.96 | 306 | 0.7826 | 0.8 | 0.3222 | 1.0211 | 0.8000 | 0.7784 | 0.2596 | 0.0643 |
| No log | 51.96 | 312 | 0.7933 | 0.795 | 0.3273 | 1.0680 | 0.795 | 0.7712 | 0.2619 | 0.0764 |
| No log | 52.96 | 318 | 0.7883 | 0.805 | 0.3247 | 1.0730 | 0.805 | 0.7834 | 0.2426 | 0.0712 |
| No log | 53.96 | 324 | 0.7811 | 0.815 | 0.3219 | 1.0623 | 0.815 | 0.7913 | 0.2259 | 0.0716 |
| No log | 54.96 | 330 | 0.7784 | 0.815 | 0.3203 | 1.0657 | 0.815 | 0.7917 | 0.2797 | 0.0690 |
| No log | 55.96 | 336 | 0.7827 | 0.81 | 0.3219 | 1.0770 | 0.81 | 0.7885 | 0.2491 | 0.0752 |
| No log | 56.96 | 342 | 0.7701 | 0.815 | 0.3166 | 1.0614 | 0.815 | 0.7913 | 0.2664 | 0.0689 |
| No log | 57.96 | 348 | 0.7748 | 0.815 | 0.3187 | 1.0699 | 0.815 | 0.7913 | 0.2487 | 0.0722 |
| No log | 58.96 | 354 | 0.7669 | 0.815 | 0.3155 | 1.0607 | 0.815 | 0.7919 | 0.2482 | 0.0685 |
| No log | 59.96 | 360 | 0.7721 | 0.81 | 0.3180 | 1.0746 | 0.81 | 0.7859 | 0.2385 | 0.0730 |
| No log | 60.96 | 366 | 0.7645 | 0.815 | 0.3145 | 1.0650 | 0.815 | 0.7913 | 0.2468 | 0.0688 |
| No log | 61.96 | 372 | 0.7672 | 0.815 | 0.3157 | 1.0782 | 0.815 | 0.7913 | 0.2228 | 0.0728 |
| No log | 62.96 | 378 | 0.7625 | 0.82 | 0.3139 | 1.0673 | 0.82 | 0.8025 | 0.2323 | 0.0688 |
| No log | 63.96 | 384 | 0.7627 | 0.81 | 0.3144 | 1.1893 | 0.81 | 0.7892 | 0.2236 | 0.0710 |
| No log | 64.96 | 390 | 0.7629 | 0.815 | 0.3141 | 1.1934 | 0.815 | 0.7972 | 0.2277 | 0.0707 |
| No log | 65.96 | 396 | 0.7569 | 0.81 | 0.3118 | 1.1003 | 0.81 | 0.7866 | 0.2577 | 0.0696 |
| No log | 66.96 | 402 | 0.7619 | 0.815 | 0.3136 | 1.1365 | 0.815 | 0.7919 | 0.2562 | 0.0732 |
| No log | 67.96 | 408 | 0.7565 | 0.815 | 0.3114 | 1.1325 | 0.815 | 0.7919 | 0.2467 | 0.0694 |
| No log | 68.96 | 414 | 0.7558 | 0.815 | 0.3117 | 1.1895 | 0.815 | 0.7972 | 0.2453 | 0.0705 |
| No log | 69.96 | 420 | 0.7550 | 0.815 | 0.3111 | 1.1924 | 0.815 | 0.7972 | 0.2107 | 0.0709 |
| No log | 70.96 | 426 | 0.7573 | 0.805 | 0.3123 | 1.1886 | 0.805 | 0.7795 | 0.2476 | 0.0737 |
| No log | 71.96 | 432 | 0.7521 | 0.81 | 0.3099 | 1.1911 | 0.81 | 0.7866 | 0.2117 | 0.0698 |
| No log | 72.96 | 438 | 0.7542 | 0.81 | 0.3112 | 1.1878 | 0.81 | 0.7827 | 0.2332 | 0.0726 |
| No log | 73.96 | 444 | 0.7509 | 0.815 | 0.3096 | 1.1880 | 0.815 | 0.7899 | 0.2364 | 0.0709 |
| No log | 74.96 | 450 | 0.7526 | 0.81 | 0.3105 | 1.1889 | 0.81 | 0.7827 | 0.2453 | 0.0724 |
| No log | 75.96 | 456 | 0.7488 | 0.81 | 0.3090 | 1.1869 | 0.81 | 0.7827 | 0.2285 | 0.0699 |
| No log | 76.96 | 462 | 0.7506 | 0.815 | 0.3097 | 1.1901 | 0.815 | 0.7934 | 0.2547 | 0.0721 |
| No log | 77.96 | 468 | 0.7505 | 0.81 | 0.3098 | 1.1876 | 0.81 | 0.7827 | 0.2110 | 0.0724 |
| No log | 78.96 | 474 | 0.7487 | 0.815 | 0.3089 | 1.1885 | 0.815 | 0.7934 | 0.2319 | 0.0715 |
| No log | 79.96 | 480 | 0.7472 | 0.81 | 0.3083 | 1.1877 | 0.81 | 0.7827 | 0.2310 | 0.0714 |
| No log | 80.96 | 486 | 0.7494 | 0.81 | 0.3094 | 1.1877 | 0.81 | 0.7827 | 0.2462 | 0.0738 |
| No log | 81.96 | 492 | 0.7466 | 0.815 | 0.3082 | 1.1888 | 0.815 | 0.7922 | 0.2181 | 0.0709 |
| No log | 82.96 | 498 | 0.7467 | 0.81 | 0.3083 | 1.1874 | 0.81 | 0.7827 | 0.2454 | 0.0714 |
| 0.7129 | 83.96 | 504 | 0.7479 | 0.815 | 0.3088 | 1.1888 | 0.815 | 0.7922 | 0.2272 | 0.0741 |
| 0.7129 | 84.96 | 510 | 0.7456 | 0.81 | 0.3080 | 1.1853 | 0.81 | 0.7847 | 0.2358 | 0.0719 |
| 0.7129 | 85.96 | 516 | 0.7465 | 0.815 | 0.3082 | 1.1908 | 0.815 | 0.7922 | 0.2322 | 0.0721 |
| 0.7129 | 86.96 | 522 | 0.7454 | 0.805 | 0.3081 | 1.1848 | 0.805 | 0.7819 | 0.2262 | 0.0719 |
| 0.7129 | 87.96 | 528 | 0.7471 | 0.815 | 0.3086 | 1.1894 | 0.815 | 0.7922 | 0.2351 | 0.0741 |
| 0.7129 | 88.96 | 534 | 0.7459 | 0.815 | 0.3082 | 1.1885 | 0.815 | 0.7922 | 0.2159 | 0.0726 |
| 0.7129 | 89.96 | 540 | 0.7435 | 0.815 | 0.3072 | 1.1861 | 0.815 | 0.7922 | 0.2291 | 0.0712 |
| 0.7129 | 90.96 | 546 | 0.7454 | 0.81 | 0.3080 | 1.1876 | 0.81 | 0.7847 | 0.2180 | 0.0733 |
| 0.7129 | 91.96 | 552 | 0.7461 | 0.815 | 0.3083 | 1.1883 | 0.815 | 0.7942 | 0.2308 | 0.0743 |
| 0.7129 | 92.96 | 558 | 0.7451 | 0.815 | 0.3079 | 1.1883 | 0.815 | 0.7922 | 0.2330 | 0.0734 |
| 0.7129 | 93.96 | 564 | 0.7434 | 0.815 | 0.3073 | 1.1863 | 0.815 | 0.7942 | 0.2217 | 0.0720 |
| 0.7129 | 94.96 | 570 | 0.7446 | 0.815 | 0.3077 | 1.1882 | 0.815 | 0.7942 | 0.2400 | 0.0731 |
| 0.7129 | 95.96 | 576 | 0.7450 | 0.815 | 0.3079 | 1.1882 | 0.815 | 0.7942 | 0.2144 | 0.0735 |
| 0.7129 | 96.96 | 582 | 0.7440 | 0.815 | 0.3075 | 1.1871 | 0.815 | 0.7942 | 0.2348 | 0.0731 |
| 0.7129 | 97.96 | 588 | 0.7441 | 0.815 | 0.3076 | 1.1876 | 0.815 | 0.7942 | 0.2225 | 0.0732 |
| 0.7129 | 98.96 | 594 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 |
| 0.7129 | 99.96 | 600 | 0.7442 | 0.815 | 0.3076 | 1.1877 | 0.815 | 0.7942 | 0.2072 | 0.0734 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
garrettbaber/twitter-roberta-base-anger-intensity
|
garrettbaber
| 2023-07-06T23:29:21Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"text-regression",
"anger",
"emotion",
"emotion intensity",
"unk",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T23:19:12Z |
---
tags:
- text-regression
- anger
- emotion
- emotion intensity
language:
- unk
widget:
- text: I am furious
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.030118000944741423
---
# twitter-roberta-base-anger-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 - Task 1 Affect in Tweets (subtask: El-reg / text regression).
Warning: Hosted inference API produces inaccurate values
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72775139028
- CO2 Emissions (in grams): 0.0301
## Validation Metrics
- Loss: 0.011
- MSE: 0.011
- MAE: 0.085
- R2: 0.641
- RMSE: 0.103
- Explained Variance: 0.641
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I am furious"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-anger-intensity
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-anger-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-anger-intensity")
inputs = tokenizer("I am furious", return_tensors="pt")
outputs = model(**inputs)
```
|
AbduBot/dqn-SpaceInvadersNoFrameskip-v4
|
AbduBot
| 2023-07-06T23:27:22Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T23:26:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 653.50 +/- 202.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AbduBot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AbduBot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AbduBot
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
zwpython/zw-chinese-vicuna-7B-v1.3
|
zwpython
| 2023-07-06T23:17:43Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-07-06T23:12:58Z |
全球首发,vicuna-7B-v1.3中文ok,母版是vicuna-7B-v1.3正式版。
更多参见:https://github.com/ziwang-com/chinese-StableVicuna 和:zw公众号
为响应国家AI大战略需求,提高国内AI、GPT初创团队的竞争力,不要输在起跑线上。
zw-vicuna系列zw中文汉化版,首度提供免费下载通道。 百度网盘提取码:hiks
链接:https://pan.baidu.com/s/1EH19ablXVLYQP1f-IaPS-Q?pwd=hiks
如有更改,最新下载地址请参见QQ群文件:655402626(GPT+千人QQ大群)
zw-vicuna中文汉化版,模型文件是ggml版格式
cpu+gpu版本,llamacpp运行,win,linux,mac-os通吃。
具体细节参见:https://github.com/ggerganov/llama.cpp
Prompt template提示词模板:
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
USER: prompt
ASSISTANT:
更多细节和技术参数,参见:
官方原版:https://huggingface.co/lmsys/vicuna-7b-v1.3
Github项目: https://github.com/ziwang-com/chinese-StableVicuna
|
garrettbaber/twitter-roberta-base-sadness-intensity
|
garrettbaber
| 2023-07-06T23:08:47Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"text-regression",
"sadness",
"emotion",
"emotion intensity",
"unk",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T23:04:50Z |
---
tags:
- text-regression
- sadness
- emotion
- emotion intensity
language:
- unk
widget:
- text: I'm feeling down
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.025884770512937715
---
# twitter-roberta-base-sadness-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 - Task 1 Affect in Tweets (subtask: El-reg / text regression).
Warning: Hosted inference API produces inaccurate values
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72772139027
- CO2 Emissions (in grams): 0.0259
## Validation Metrics
- Loss: 0.011
- MSE: 0.011
- MAE: 0.079
- R2: 0.726
- RMSE: 0.103
- Explained Variance: 0.727
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I'm feeling down"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-sadness-intensity
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity")
inputs = tokenizer("I'm feeling down", return_tensors="pt")
outputs = model(**inputs)
```
|
yongsun-yoon/minilmv2-bertscore-distilled
|
yongsun-yoon
| 2023-07-06T23:00:56Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-30T22:04:37Z |
This is a distilled BERTScore model.
Please read [this post](https://medium.com/@yongsun.yoon/bertscore-knowledge-distillation-42721b3508e2) for details.
```python
from bert_score import BERTScorer
texts1 = ['This is a text.']
texts2 = ['This is another text.']
scorer = BERTScorer(model_type='yongsun-yoon/minilmv2-bertscore-distilled', num_layers=6)
P, R, F = scorer.score(texts1, texts2)
```
|
garrettbaber/twitter-roberta-base-joy-intensity
|
garrettbaber
| 2023-07-06T22:58:25Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"text-regression",
"joy",
"emotion",
"emotion intensity",
"en",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T22:47:30Z |
---
tags:
- text-regression
- joy
- emotion
- emotion intensity
language:
- en
widget:
- text: I am elated!
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.03988347977318191
---
# twitter-roberta-base-joy-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 - Task 1 Affect in Tweets (subtask: El-reg / text regression).
Warning: Hosted inference API produces inaccurate values
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72771139026
- CO2 Emissions (in grams): 0.0399
## Validation Metrics
- Loss: 0.013
- MSE: 0.013
- MAE: 0.088
- R2: 0.707
- RMSE: 0.116
- Explained Variance: 0.709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I am elated!"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-joy-intensity
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-joy-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-joy-intensity")
inputs = tokenizer("I am elated!", return_tensors="pt")
outputs = model(**inputs)
```
|
TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-06T22:51:43Z | 0 | 2 | null |
[
"license:other",
"region:us"
] | null | 2023-07-06T22:46:13Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Chaoyi Wu's PMC_LLAMA 7B 10 Epoch GGML
These files are GGML format model files for [Chaoyi Wu's PMC_LLAMA 7B 10 Epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 pmc_llama-7b-10-epoch-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Chaoyi Wu's PMC_LLAMA 7B 10 Epoch
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.
Notably, different from `chaoyi-wu/PMC_LLAMA_7B`, this model is further trained for 10 epochs.
The model was trained with the following hyperparameters:
* Epochs: **10**
* Batch size: 128
* Cutoff length: 512
* Learning rate: 2e-5
Each epoch we sample 512 tokens per paper for training.
The model can be loaded as follows:
```
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ',tokenizer.decode(generated[0]))
```
|
JoshELambert/groundslim
|
JoshELambert
| 2023-07-06T22:49:27Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-06T21:39:27Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# /var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpqec8sju8/JoshELambert/groundslim
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpqec8sju8/JoshELambert/groundslim")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
YakovElm/Hyperledger_10_BERT_More_Properties
|
YakovElm
| 2023-07-06T22:43:18Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-06T22:42:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_10_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_10_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3569
- Train Accuracy: 0.8838
- Validation Loss: 0.3936
- Validation Accuracy: 0.8600
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3701 | 0.8762 | 0.4061 | 0.8600 | 0 |
| 0.3614 | 0.8838 | 0.4039 | 0.8600 | 1 |
| 0.3569 | 0.8838 | 0.3936 | 0.8600 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
S1X3L4/ppo-Huggy
|
S1X3L4
| 2023-07-06T22:24:21Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-06T22:24:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: S1X3L4/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
juancopi81/lmd-8bars-2048-epochs20_v3
|
juancopi81
| 2023-07-06T21:58:13Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T23:21:51Z |
---
tags:
- generated_from_trainer
model-index:
- name: lmd-8bars-2048-epochs20_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmd-8bars-2048-epochs20_v3
This model is a fine-tuned version of [juancopi81/lmd-8bars-2048-epochs20_v2](https://huggingface.co/juancopi81/lmd-8bars-2048-epochs20_v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 4
- seed: 1
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0054 | 0.5 | 4994 | 0.9774 |
| 0.9784 | 1.0 | 9988 | 0.9563 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_random_usrb
|
aroot
| 2023-07-06T21:56:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T21:39:51Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_random_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_random_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8937
- Bleu: 4.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/Tulu-7B-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-06T21:49:56Z | 0 | 1 | null |
[
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"arxiv:2304.07327",
"arxiv:2304.03277",
"license:other",
"region:us"
] | null | 2023-07-06T17:51:17Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B GGML
These files are GGML format model files for [Allen AI's Tulu 7B](https://huggingface.co/TheBloke/tulu-7B-fp16).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/allenai/tulu-7b)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| tulu-7b-superhot-8k.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| tulu-7b-superhot-8k.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| tulu-7b-superhot-8k.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| tulu-7b-superhot-8k.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| tulu-7b-superhot-8k.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 tulu-7b-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Allen AI's Tulu 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B fp16
These files are pytorch format fp16 model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)
## Prompt template
The following template should be used:
```
<|user|>
prompt goes here
<|assistant|>
```
**Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
In other words, the prompt is:
```
<|user|>\nprompt goes here\n<|assistant|>\n
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Allen AI's Tulu 7B
# Tulu 7B
This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
|
JoshELambert/weakgov
|
JoshELambert
| 2023-07-06T21:47:24Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-06T21:21:31Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# /var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmp794d5c0_/JoshELambert/weakgov
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmp794d5c0_/JoshELambert/weakgov")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
osiria/prime-6.6b
|
osiria
| 2023-07-06T21:40:15Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xglm",
"text-generation",
"it",
"arxiv:2005.14165",
"arxiv:2112.10668",
"license:mit",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-07-02T11:57:14Z |
---
license: mit
language:
- it
pipeline_tag: text-generation
---
--------------------------------------------------------------------------------------------------
<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;"> Model: PRIME 6.6B 🔥</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;"> Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>
--------------------------------------------------------------------------------------------------
<h3>Model description</h3>
This model is a <b>causal</b> language model for the <b>Italian</b> language, based on a GPT-like <b>[1]</b> architecture (more specifically, the model has been obtained by modifying Meta's XGLM architecture <b>[2]</b> and exploiting its 7.5B checkpoint).
The model has ~6.6B parameters and a vocabulary of 50.335 tokens. It is a foundation model, pre-trained for causal language modeling, so it is mainly suitable for basic natural language generation. It also has some zero-shot and few-shots inference capabilities, but you will have to fine-tune it in order to use it on more specific downstream tasks.
<h3>Quantization</h3>
The released checkpoint is quantized in 8-bit, so that it can easily be loaded and used for training and inference on ordinary hardware like consumer GPUs, and it requires the installation of the <b>transformers</b> library version >= 4.30.1 and the <b>bitsandbytes</b> library, version >= 0.37.2
On Windows operating systems, the <b>bitsandbytes-windows</b> module also needs to be installed on top. However, it appears that the module is not yet updated with some recent features, like the possibility to save the 8-bit quantized models.
In order to include this, you can install the fork in [this repo](https://github.com/francesco-russo-githubber/bitsandbytes-windows), using:
```bash
pip install git+https://github.com/francesco-russo-githubber/bitsandbytes-windows.git
```
<h3>Quick usage</h3>
In order to use the model for inference, the following pipeline is needed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("osiria/prime-6.6b")
model = AutoModelForCausalLM.from_pretrained("osiria/prime-6.6b")
pipeline_nlg = pipeline("text-generation", model = model, tokenizer = tokenizer)
pipeline_nlg("Ciao, mi chiamo Marco Rossi e")
# [{'generated_text': 'Ciao, mi chiamo Marco Rossi e sono un fotografo professionista.'}]
```
<h3>Limitations</h3>
The model might behave erratically when presented with prompts which are too far away from its pre-training and, because of the probabilistic nature of its generation, it might occasionally produce biased or offensive content with respect to gender, race, ideologies, and political or religious beliefs.
These limitations imply that the model and its outputs should be used with caution, and should not be involved in situations that require the generated text to be fair or true.
<h3>References</h3>
[1] https://arxiv.org/abs/2005.14165
[2] https://arxiv.org/abs/2112.10668
<h3>License</h3>
The model is released under <b>MIT</b> license
|
nkpz/bayling-13b-v1.1-gptq-32g
|
nkpz
| 2023-07-06T21:40:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T21:11:59Z |
---
license: gpl-3.0
---
4-bit (32 groupsize) quantized files for [ICTNLP/bayling-13b-v1.1](https://huggingface.co/ICTNLP/bayling-13b-v1.1)
`BayLing (百聆, bǎi líng) is an instruction-following LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.`
Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
Command used to quantize: python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors
|
TheBloke/Selfee-13B-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-06T21:38:08Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-07-06T17:16:57Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B GGML
These files are GGML format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/selfee-13b-fp16).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/selfee-13b-delta)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| selfee-13b-superhot-8k.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| selfee-13b-superhot-8k.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| selfee-13b-superhot-8k.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| selfee-13b-superhot-8k.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| selfee-13b-superhot-8k.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 selfee-13b-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Kaist AI's Selfee 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B GGML
This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 13B
<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>
# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.
## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly has five contents:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).
This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌
**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**
## Data Release
For data collection, we collected datasets from five different fields. These are the Stanford Alpaca dataset, math collection, code collection, Flan collection, and ShareGPT. We provide code that we used to make a dataset for training. We also provide code how we preprocessed ShareGPT. For ShareGPT, we only use the first (question, answer) pair from human and GPT, respectively. We only use instances which are classified as english,and filter instance which is not a form of question.
For other datsets, we do not need special data collection method.
## Data Generation Process
To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision or hit the maximum iteration. However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br>
You can see the details with command [here](data_augmentation/README.md).<br>
*We provide the whole dataset after collection and augmentation using huggingface([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use any of them!
## Training
We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>
To reproduce the training procedure, here are the steps. <br>
```
pip install -r requirements.txt
```
```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full \
```
The hyperparameters are as follows, following Vicuna and Alpaca.
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
## Inference
<b>Restoring checkpoint using diff</b><br>
We provide diff weight and code which can restore the same model with SelFee. To restore the original SelFee weight, you first need to convert the Meta's original LLAMA checkpoint into huggingface format into your local machine. Once you are done, you can restore the same checkpoint of our model by using the following command
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```
<b>Autonomous Inference Mode</b><br>
Because SelFee is trained to generate iterative feedback and revisions until the response is satisfying, it automatically generates iterative feedback and revisions on a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with sequences like `Revision is not needed.`, the model autonomously terminates generation. <br>
For autonomous inference mode,
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```
<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions corresponds to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` into `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence of `Revision is needed.`, the model would continually revise the answer.
For revision enforce inference mode, use the `max-num-revision` argument.
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
```
## Evaluation
Following evaluation setting of Vicuna, we evaluate on 80 diverse queries and utilize GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One of the difference with Vicuna evaluation is that due to positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is inferred twice, depending on its position.<br>
We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br>
### GPT-4 Automatic Evaluation
First, you need to get your API key to get access to the GPT-4 API.
```
export OPENAI_API_KEYS={personal_key}
```
To compare the performance of a generation result (for example, located on `evaluation/answer/file_A.jsonl`) with another generation result (located on `evaluation/anwer/file_B.jsonl`),
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
```
To mitigate the positional bias of GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with opposite position is also needed.
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
```
## Limitations
Similar to other LLaMA-finetuned models, SelFee also make some mistakes especially for math, reasoning, factuality, and coding tasks. Although our performance outperforms ChatGPT on Vicuna setting, the evaluation setting contains some limitations in terms of comprehension (limited to 80 queries), inconsistency, and unreliability. Therefore, further research for a better evaluation setting is needed. Please take these claims with a grain of salt.
## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!
#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:
1. Run the controller
```
python3 -m serve.controller
```
2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```
3. Run the web server
```
python3 -m serve.gradio_web_server --share
```
You can find the serving code [here](serve).
### Team members
<a href="https://seonghyeonye.github.io/)">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)
### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee is released on huggingface.
### License
The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.
### Citation
Please cite if you use the data or code in this repo.
```
@misc{selfee2023,
author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
url = {https://kaistai.github.io/SelFee/},
month = {May},
year = {2023},
howpublished = {Blog post}
}
```
|
HeshamMamdouh/mt5-small-v2-sum-fine-tuned
|
HeshamMamdouh
| 2023-07-06T21:25:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T21:22:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-v2-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-v2-sum-fine-tuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.7918
- Validation Loss: 9.1352
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 14.5286 | 12.5993 | 0 |
| 13.7167 | 12.7206 | 1 |
| 13.7518 | 12.4721 | 2 |
| 13.5991 | 12.0654 | 3 |
| 13.0693 | 11.5237 | 4 |
| 12.8718 | 11.5755 | 5 |
| 12.6745 | 11.3361 | 6 |
| 12.4659 | 10.6694 | 7 |
| 12.2692 | 10.0483 | 8 |
| 12.2115 | 10.5089 | 9 |
| 11.9810 | 10.3895 | 10 |
| 11.6432 | 10.1090 | 11 |
| 11.6436 | 9.4868 | 12 |
| 11.3711 | 9.9035 | 13 |
| 11.1223 | 8.9180 | 14 |
| 10.9886 | 9.3682 | 15 |
| 10.8426 | 8.9964 | 16 |
| 10.5593 | 9.2168 | 17 |
| 10.5568 | 8.9877 | 18 |
| 10.1875 | 8.8072 | 19 |
| 10.1814 | 10.3268 | 20 |
| 10.0053 | 11.1192 | 21 |
| 9.6850 | 10.9950 | 22 |
| 9.6080 | 10.7909 | 23 |
| 9.4208 | 10.9226 | 24 |
| 9.3501 | 10.1040 | 25 |
| 9.2757 | 10.1148 | 26 |
| 9.1751 | 9.9607 | 27 |
| 8.9227 | 9.1899 | 28 |
| 8.7918 | 9.1352 | 29 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.12.1
|
aroot/eng-guj-simcse_random_usrb
|
aroot
| 2023-07-06T21:19:25Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T20:57:54Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2995
- Bleu: 2.6979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_random_ssrb
|
aroot
| 2023-07-06T21:15:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T20:54:29Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_random_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_random_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8934
- Bleu: 4.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_central_usblu
|
aroot
| 2023-07-06T21:12:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T20:52:09Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_central_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_central_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8855
- Bleu: 4.1385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
JoshELambert/fishpop
|
JoshELambert
| 2023-07-06T20:51:49Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-06T20:00:19Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# /var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/8x/qp375g154zg3h3ktpt_8tyqw0000gn/T/tmpti_nwtb1/JoshELambert/fishpop")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
BubbleJoe/swin-tiny-patch4-window7-224-finetuned-eurosat
|
BubbleJoe
| 2023-07-06T20:47:28Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-06T20:30:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9748148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0742
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2967 | 1.0 | 190 | 0.1191 | 0.9622 |
| 0.1776 | 2.0 | 380 | 0.0897 | 0.9719 |
| 0.1334 | 3.0 | 570 | 0.0742 | 0.9748 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
espnet/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw
|
espnet
| 2023-07-06T20:46:31Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"self-supervised-learning",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-01-04T13:59:15Z |
---
tags:
- espnet
- audio
- self-supervised-learning
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 SSL model
### `simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 753f40d61813436d4e76660904d02eaed7a6649e
pip install -e .
cd egs2/librispeech/ssl1
./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw
```
## SSL config
<details><summary>expand</summary>
```
config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it1.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw
ngpu: 1
seed: 0
num_workers: 64
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 49251
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 250
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 45000000
valid_batch_bins: null
train_shape_file:
- exp/hubert_iter1_stats_raw/train/speech_shape
- exp/hubert_iter1_stats_raw/train/text_shape.word
valid_shape_file:
- exp/hubert_iter1_stats_raw/valid/speech_shape
- exp/hubert_iter1_stats_raw/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 400
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960/wav.scp
- speech
- sound
- - dump/raw/train_960/text.km.kmeans_iter1_hubert_train_960_portion0.1
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text.km.kmeans_iter1_hubert_train_960_portion0.1
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 32000
token_list:
- '386'
- '160'
- '89'
- '3'
- '448'
- '431'
- '319'
- '247'
- '256'
- '23'
- '267'
- '274'
- '479'
- '227'
- '197'
- '74'
- '362'
- '159'
- '190'
- '275'
- '241'
- '147'
- '242'
- '105'
- '7'
- '320'
- '311'
- '327'
- '130'
- '485'
- '427'
- '22'
- '493'
- '254'
- '451'
- '399'
- '342'
- '443'
- '38'
- '33'
- '53'
- '238'
- '86'
- '61'
- '263'
- '218'
- '316'
- '350'
- '96'
- '492'
- '341'
- '496'
- '325'
- '462'
- '24'
- '328'
- '133'
- '407'
- '41'
- '304'
- '373'
- '167'
- '352'
- '456'
- '149'
- '279'
- '84'
- '217'
- '494'
- '139'
- '381'
- '416'
- '305'
- '446'
- '337'
- '228'
- '35'
- '372'
- '55'
- '237'
- '66'
- '13'
- '188'
- '291'
- '43'
- '132'
- '232'
- '144'
- '497'
- '318'
- '0'
- '31'
- '49'
- '400'
- '10'
- '406'
- '398'
- '154'
- '300'
- '226'
- '93'
- '348'
- '82'
- '2'
- '423'
- '113'
- '395'
- '92'
- '394'
- '293'
- '62'
- '137'
- '476'
- '216'
- '432'
- '155'
- '29'
- '369'
- '64'
- '163'
- '389'
- '278'
- '25'
- '164'
- '310'
- '213'
- '126'
- '331'
- '414'
- '11'
- '404'
- '185'
- '365'
- '484'
- '409'
- '17'
- '193'
- '178'
- '273'
- '37'
- '390'
- '128'
- '170'
- '203'
- '298'
- '229'
- '383'
- '67'
- '27'
- '118'
- '72'
- '142'
- '73'
- '65'
- '231'
- '104'
- '124'
- '428'
- '345'
- '230'
- '287'
- '175'
- '294'
- '184'
- '97'
- '48'
- '457'
- '288'
- '204'
- '379'
- '107'
- '200'
- '99'
- '269'
- '442'
- '353'
- '129'
- '445'
- '51'
- '360'
- '80'
- '83'
- '201'
- '223'
- '312'
- '69'
- '30'
- '202'
- '70'
- '286'
- '236'
- '50'
- '123'
- '88'
- '205'
- '151'
- '127'
- '186'
- '367'
- '299'
- '313'
- '220'
- '206'
- '297'
- '422'
- '71'
- '44'
- '281'
- '91'
- '57'
- '408'
- '112'
- '26'
- '145'
- '16'
- '75'
- '235'
- '183'
- '222'
- '171'
- '121'
- '250'
- '472'
- '195'
- '94'
- '357'
- '393'
- '380'
- '370'
- '363'
- '103'
- '396'
- '468'
- '346'
- '40'
- '180'
- '42'
- '351'
- '450'
- '477'
- '239'
- '143'
- '361'
- '314'
- '392'
- '161'
- '473'
- '198'
- '194'
- '371'
- '433'
- '56'
- '444'
- '138'
- '157'
- '245'
- '140'
- '165'
- '412'
- '354'
- '9'
- '333'
- '85'
- '176'
- '323'
- '301'
- '215'
- '264'
- '434'
- '489'
- '355'
- '488'
- '382'
- '177'
- '268'
- '290'
- '114'
- '266'
- '334'
- '356'
- '90'
- '244'
- '259'
- '368'
- '6'
- '303'
- '478'
- '199'
- '376'
- '480'
- '401'
- '1'
- '168'
- '453'
- '19'
- '54'
- '221'
- '100'
- '4'
- '495'
- '77'
- '240'
- '45'
- '481'
- '224'
- '20'
- '120'
- '58'
- '162'
- '12'
- '109'
- '491'
- '115'
- '397'
- '340'
- '196'
- '68'
- '34'
- '415'
- '429'
- '421'
- '475'
- '335'
- '338'
- '172'
- '39'
- '258'
- '330'
- '246'
- '425'
- '296'
- '125'
- '60'
- '52'
- '271'
- '173'
- '469'
- '289'
- '439'
- '207'
- '487'
- '272'
- '332'
- '284'
- '308'
- '388'
- '95'
- '248'
- '101'
- '36'
- '14'
- '315'
- '262'
- '146'
- '343'
- '79'
- '426'
- '21'
- '253'
- '63'
- '292'
- '81'
- '385'
- '309'
- '366'
- '116'
- '131'
- '87'
- '449'
- '283'
- '214'
- '474'
- '329'
- '471'
- '225'
- '108'
- '136'
- '148'
- '306'
- '150'
- '378'
- '460'
- '307'
- '141'
- '98'
- '436'
- '402'
- '192'
- '8'
- '483'
- '440'
- '47'
- '466'
- '486'
- '5'
- '257'
- '447'
- '377'
- '111'
- '251'
- '490'
- '265'
- '438'
- '158'
- '384'
- '135'
- '102'
- '276'
- '211'
- '219'
- '187'
- '347'
- '32'
- '182'
- '169'
- '410'
- '455'
- '461'
- '482'
- '374'
- '463'
- '452'
- '59'
- '152'
- '174'
- '418'
- '166'
- '470'
- '459'
- '153'
- '179'
- '498'
- '430'
- '419'
- '467'
- '208'
- '326'
- '210'
- '270'
- '243'
- '255'
- '233'
- '261'
- '336'
- '282'
- '234'
- '464'
- '181'
- '156'
- '359'
- '454'
- '420'
- '28'
- '249'
- '106'
- '302'
- '191'
- '209'
- '46'
- '117'
- '403'
- '280'
- '324'
- '458'
- '134'
- '122'
- '212'
- '18'
- '437'
- '78'
- '375'
- '252'
- '405'
- '295'
- '435'
- '317'
- '260'
- '364'
- '322'
- '15'
- '339'
- '413'
- '465'
- '285'
- '189'
- '417'
- '344'
- '110'
- '119'
- '277'
- '499'
- '358'
- '411'
- '387'
- '349'
- '424'
- '391'
- '76'
- '441'
- '321'
- <unk>
- <sos/eos>
init: null
collate_fn_conf:
label_downsampling: 1
pad: false
rand_crop: true
input_size: 1
num_classes: 500
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
pred_masked_weight: 1.0
pred_nomask_weight: 0.0
loss_weights: 0.0
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: null
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: torchaudio_hubert
encoder_conf:
encoder_projection_dropout: 0.1
encoder_attention_dropout: 0.1
encoder_ff_interm_dropout: 0.0
encoder_dropout: 0.1
encoder_layer_drop: 0.05
model: torchaudio
model_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
|
espnet
| 2023-07-06T20:38:53Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"self-supervised-learning",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-12-31T03:54:13Z |
---
tags:
- espnet
- audio
- self-supervised-learning
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 SSL model
### `simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 753f40d61813436d4e76660904d02eaed7a6649e
pip install -e .
cd egs2/librispeech/ssl1
./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
```
## SSL config
<details><summary>expand</summary>
```
config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it0.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
ngpu: 1
seed: 0
num_workers: 64
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45091
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 250
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 48000000
valid_batch_bins: null
train_shape_file:
- exp/hubert_iter0_stats_raw/train/speech_shape
- exp/hubert_iter0_stats_raw/train/text_shape.word
valid_shape_file:
- exp/hubert_iter0_stats_raw/valid/speech_shape
- exp/hubert_iter0_stats_raw/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 400
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960/wav.scp
- speech
- sound
- - dump/raw/train_960/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 32000
token_list:
- '81'
- '5'
- '79'
- '84'
- '27'
- '35'
- '67'
- '56'
- '10'
- '99'
- '24'
- '3'
- '48'
- '8'
- '42'
- '16'
- '32'
- '31'
- '47'
- '43'
- '20'
- '73'
- '49'
- '86'
- '18'
- '64'
- '34'
- '59'
- '95'
- '0'
- '52'
- '44'
- '61'
- '57'
- '30'
- '1'
- '93'
- '6'
- '69'
- '19'
- '7'
- '65'
- '28'
- '89'
- '2'
- '96'
- '91'
- '72'
- '38'
- '78'
- '26'
- '13'
- '39'
- '94'
- '4'
- '88'
- '85'
- '51'
- '82'
- '41'
- '50'
- '21'
- '80'
- '97'
- '87'
- '25'
- '54'
- '12'
- '40'
- '60'
- '29'
- '11'
- '53'
- '71'
- '83'
- '74'
- '68'
- '55'
- '62'
- '76'
- '45'
- '75'
- '92'
- '46'
- '36'
- '66'
- '22'
- '77'
- '23'
- '63'
- '37'
- '58'
- '33'
- '15'
- '17'
- '90'
- '98'
- '14'
- '70'
- '9'
- <unk>
- <sos/eos>
init: null
collate_fn_conf:
label_downsampling: 2
pad: false
rand_crop: true
input_size: 1
num_classes: 100
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
pred_masked_weight: 1.0
pred_nomask_weight: 0.0
loss_weights: 0.0
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: null
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: torchaudio_hubert
encoder_conf:
encoder_projection_dropout: 0.1
encoder_attention_dropout: 0.1
encoder_ff_interm_dropout: 0.0
encoder_dropout: 0.1
encoder_layer_drop: 0.05
model: torchaudio
model_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
WALIDALI/oumadvenly
|
WALIDALI
| 2023-07-06T20:38:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-06T20:33:28Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### oumadvenly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-06T20:38:02Z | 0 | 4 | null |
[
"license:other",
"region:us"
] | null | 2023-07-06T18:47:36Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's WizardLM-7B-V1.0-Uncensored GGML
These files are GGML format model files for [Eric Hartford's WizardLM-7B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| wizardlm-7b-v1.0-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 wizardlm-7b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's WizardLM-7B-V1.0-Uncensored
This is a retraining of https://huggingface.co/WizardLM/WizardLM-7B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Unlike WizardLM/WizardLM-7B-V1.0, but like WizardLM/WizardLM-13B-V1.0 and WizardLM/WizardLM-33B-V1.0, this model is trained with Vicuna-1.1 style prompts.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
|
aroot/eng-guj-simcse_random_ssrb
|
aroot
| 2023-07-06T20:32:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-06T20:10:52Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2802
- Bleu: 2.8939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sinny/ppo-LunarLander-v2
|
sinny
| 2023-07-06T20:30:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-03T10:35:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.00 +/- 13.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cjohlmacher/unit2-taxi-overly-confident
|
cjohlmacher
| 2023-07-06T20:24:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T20:24:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-overly-confident
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-overly-confident", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cjohlmacher/unit2-taxi-2
|
cjohlmacher
| 2023-07-06T20:20:42Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T20:18:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ericNguyen0132/roberta-large-Dep-first
|
ericNguyen0132
| 2023-07-06T19:55:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-04T11:41:34Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep-first
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep-first
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1795
- Accuracy: 0.702
- F1: 0.5706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5563 | 1.0 | 751 | 0.5324 | 0.756 | 0.6188 |
| 0.4721 | 2.0 | 1502 | 0.6204 | 0.691 | 0.5874 |
| 0.3836 | 3.0 | 2253 | 0.7990 | 0.696 | 0.525 |
| 0.3245 | 4.0 | 3004 | 0.9714 | 0.694 | 0.5726 |
| 0.2795 | 5.0 | 3755 | 1.1795 | 0.702 | 0.5706 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nlphug/distilbert-base-uncased-finetuned-squad
|
nlphug
| 2023-07-06T19:54:38Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-06T10:10:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
earentilt/ppo-LunarLander-v2
|
earentilt
| 2023-07-06T19:49:29Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-06T19:49:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.47 +/- 42.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.