modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 06:31:37) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 06:31:07) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
TheYuriLover/airoboros-13b-gpt4-1.4-GPTQ-32g-ao-ts
|
TheYuriLover
| 2023-06-27T12:07:09Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T08:22:46Z |
This is the GPTQ 4-bit quantization of this model: https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
The quantization was made with this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
It uses the triton branch with all the GPTQ implementations available (true_sequential + act_order + groupsize 32).
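The card does not include a loading example. A minimal sketch with the AutoGPTQ library (an assumption, not part of the original card; it presumes the quantized weights are stored as safetensors and that the group-size-32 settings are picked up from the repo's quantize config):
```python
# Hedged sketch: loading a GPTQ 4-bit checkpoint with AutoGPTQ (not from the original card).
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "TheYuriLover/airoboros-13b-gpt4-1.4-GPTQ-32g-ao-ts"
tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,  # assumption: the quantized weights are shipped as safetensors
    use_triton=True,       # the quantization used the triton branch of GPTQ-for-LLaMa
)

prompt = "A chat between a user and an assistant.\nUSER: Hello!\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```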
|
youngp5/skin-conditions
|
youngp5
| 2023-06-27T11:50:31Z | 217 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"medical",
"en",
"dataset:youngp5/tumors",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-20T17:14:27Z |
---
license: mit
datasets:
- youngp5/tumors
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- medical
---
|
aidn/squadBert3Epochs
|
aidn
| 2023-06-27T11:39:42Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T10:47:14Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aidn/squadBert3Epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aidn/squadBert3Epochs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8730
- Validation Loss: 1.1031
- Epoch: 2
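The auto-generated card has no usage example. A hedged sketch with the 🤗 `pipeline` API (assuming the TensorFlow weights in this repo load with `framework="tf"`; the question and context are illustrative):
```python
# Hedged sketch, not part of the original card: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="aidn/squadBert3Epochs", framework="tf")
result = qa(
    question="What library was used to train the model?",
    context="The model was fine-tuned with Keras callbacks from the Transformers library.",
)
print(result["answer"], result["score"])
```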
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8758, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5485 | 1.1485 | 0 |
| 0.9929 | 1.1031 | 1 |
| 0.8730 | 1.1031 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
usamakenway/pygmalion-13b-4bit-128g-AutoGPTQ
|
usamakenway
| 2023-06-27T11:35:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-27T11:30:56Z |
---
language: en
license: other
commercial: no
inference: false
---
# pygmalion-13b-4bit-128g
## Model description
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
Quantized from the decoded pygmalion-13b XOR format: **https://huggingface.co/PygmalionAI/pygmalion-13b**. The weights are stored in safetensors format.
### Quantization Information
GPTQ CUDA quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py --wbits 4 models/pygmalion-13b c4 --true-sequential --groupsize 128 --save_safetensors models/pygmalion-13b/4bit-128g.safetensors
```
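A loading sketch (an assumption, not from the original card) with AutoGPTQ, pointing `model_basename` at the `4bit-128g.safetensors` file produced by the command above:
```python
# Hedged sketch: loading the 4-bit, group-size-128 safetensors checkpoint with AutoGPTQ.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "usamakenway/pygmalion-13b-4bit-128g-AutoGPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    model_basename="4bit-128g",  # assumption: file name matches the quantization command above
    use_safetensors=True,
    device="cuda:0",
)
```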
|
Anmol0130/bottle_detection_june
|
Anmol0130
| 2023-06-27T11:25:56Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:25:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: bottle_detection_june
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.84375
---
# bottle_detection_june
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Dewar's_12_Years

#### Dewar's_white_lable

#### bacardi_black

#### bacardi_carta_blanca

#### bacardi_carta_negra

#### bacardi_carta_oro

#### bombay_sapphire

#### coka_cola

#### martini

|
ahishamm/vit-huge-PH2-patch-14
|
ahishamm
| 2023-06-27T11:21:19Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:18:25Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-PH2-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-PH2-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3385
- Accuracy: 0.875
- Recall: 0.875
- F1: 0.875
- Precision: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
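The auto-generated card stops at the framework versions. A hedged inference sketch for this ViT classifier (the dermoscopic image path is a placeholder):
```python
# Hedged sketch, not from the original card: single-image inference with the fine-tuned ViT.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "ahishamm/vit-huge-PH2-patch-14"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("lesion.jpg")  # placeholder: a PH2-style dermoscopic image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```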
|
ahishamm/vit-large-PH2-patch-32
|
ahishamm
| 2023-06-27T11:17:59Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:16:18Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Accuracy: 0.85
- Recall: 0.85
- F1: 0.85
- Precision: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-PH2-patch-16
|
ahishamm
| 2023-06-27T11:13:02Z | 200 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:11:59Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-PH2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3796
- Accuracy: 0.85
- Recall: 0.85
- F1: 0.85
- Precision: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-large-PH2-sharpened-patch-16
|
ahishamm
| 2023-06-27T11:05:10Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:02:08Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-sharpened-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3520
- Accuracy: 0.875
- Recall: 0.875
- F1: 0.875
- Precision: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-large-PH2-sharpened-patch-32
|
ahishamm
| 2023-06-27T10:54:39Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T10:51:42Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-sharpened-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0309
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
antuuuu/belinaa
|
antuuuu
| 2023-06-27T10:49:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T10:44:35Z |
---
license: creativeml-openrail-m
---
|
microsoft/swin-base-patch4-window7-224-in22k
|
microsoft
| 2023-06-27T10:46:44Z | 10,959 | 15 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"swin",
"image-classification",
"vision",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (base-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoImageProcessor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
ahishamm/vit-base-PH2-sharpened-patch-16
|
ahishamm
| 2023-06-27T10:44:56Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T10:43:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-base-PH2-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-sharpened-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
antuuuu/olp
|
antuuuu
| 2023-06-27T10:44:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T10:38:29Z |
---
license: creativeml-openrail-m
---
|
Lincolntgl/Firstmodel
|
Lincolntgl
| 2023-06-27T10:12:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T10:12:33Z |
---
license: creativeml-openrail-m
---
|
CAiRE/SER-wav2vec2-large-xlsr-53-eng-zho-adults
|
CAiRE
| 2023-06-27T10:11:14Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"speech-emotion-recognition",
"audio-classification",
"en",
"zh",
"dataset:Ar4ikov/iemocap_audio_text_splitted",
"arxiv:2306.14517",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-27T09:57:17Z |
---
license: cc-by-sa-4.0
datasets:
- Ar4ikov/iemocap_audio_text_splitted
language:
- en
- zh
metrics:
- f1
library_name: transformers
pipeline_tag: audio-classification
tags:
- speech-emotion-recognition
---
# Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English and Chinese data from adult speakers.
The model is trained on the training sets of [CREMA-D](https://github.com/CheyneyComputerScience/CREMA-D), [ESD](https://github.com/HLTSingapore/Emotional-Speech-Data), [IEMOCAP](https://sail.usc.edu/iemocap/iemocap_release.htm), and [TESS](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess).
When using this model, make sure that your speech input is sampled at 16kHz.
The scripts used for training and evaluation can be found here:
[https://github.com/HLTCHKUST/elderly_ser/tree/main](https://github.com/HLTCHKUST/elderly_ser/tree/main)
## Evaluation Results
For the details (e.g., the statistics of `train`, `valid`, and `test` data), please refer to our paper on [arXiv](https://arxiv.org/abs/2306.14517).
It also provides the model's speech emotion recognition performances on: English-All, Chinese-All, English-Elderly, Chinese-Elderly, English-Adults, Chinese-Adults.
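Since the card stresses 16 kHz input, a hedged usage sketch that resamples with `librosa` before running the classifier (the audio file name is a placeholder; not part of the original card):
```python
# Hedged sketch, not from the original card: emotion classification on a 16 kHz waveform.
import librosa
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

repo_id = "CAiRE/SER-wav2vec2-large-xlsr-53-eng-zho-adults"
feature_extractor = AutoFeatureExtractor.from_pretrained(repo_id)
model = AutoModelForAudioClassification.from_pretrained(repo_id)

speech, _ = librosa.load("utterance.wav", sr=16000)  # resample to the expected 16 kHz
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```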
## Citation
Our paper will be published at INTERSPEECH 2023. In the meantime, you can find our paper on [arXiv](https://arxiv.org/abs/2306.14517).
If you find our work useful, please consider citing our paper as follows:
```
@misc{cahyawijaya2023crosslingual,
title={Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition},
author={Samuel Cahyawijaya and Holy Lovenia and Willy Chung and Rita Frieske and Zihan Liu and Pascale Fung},
year={2023},
eprint={2306.14517},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
CAiRE/SER-wav2vec2-large-xlsr-53-eng-zho-elderly
|
CAiRE
| 2023-06-27T10:09:13Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"speech-emotion-recognition",
"audio-classification",
"en",
"zh",
"arxiv:2306.14517",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-27T09:45:28Z |
---
license: cc-by-sa-4.0
language:
- en
- zh
metrics:
- f1
library_name: transformers
pipeline_tag: audio-classification
tags:
- speech-emotion-recognition
---
# Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on English and Chinese data from elderly speakers.
The model is trained on the training sets of [CREMA-D](https://github.com/CheyneyComputerScience/CREMA-D), [CSED](https://github.com/AkishinoShiame/Chinese-Speech-Emotion-Datasets), [ElderReact](https://github.com/Mayer123/ElderReact), and [TESS](https://www.kaggle.com/datasets/ejlok1/toronto-emotional-speech-set-tess).
When using this model, make sure that your speech input is sampled at 16kHz.
The scripts used for training and evaluation can be found here:
[https://github.com/HLTCHKUST/elderly_ser/tree/main](https://github.com/HLTCHKUST/elderly_ser/tree/main)
## Evaluation Results
For the details (e.g., the statistics of `train`, `valid`, and `test` data), please refer to our paper on [arXiv](https://arxiv.org/abs/2306.14517).
It also provides the model's speech emotion recognition performances on: English-All, Chinese-All, English-Elderly, Chinese-Elderly, English-Adults, Chinese-Adults.
## Citation
Our paper will be published at INTERSPEECH 2023. In the meantime, you can find our paper on [arXiv](https://arxiv.org/abs/2306.14517).
If you find our work useful, please consider citing our paper as follows:
```
@misc{cahyawijaya2023crosslingual,
title={Cross-Lingual Cross-Age Group Adaptation for Low-Resource Elderly Speech Emotion Recognition},
author={Samuel Cahyawijaya and Holy Lovenia and Willy Chung and Rita Frieske and Zihan Liu and Pascale Fung},
year={2023},
eprint={2306.14517},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Intel/xlm-roberta-base-mrpc-int8-dynamic-inc
|
Intel
| 2023-06-27T10:01:34Z | 5 | 0 |
transformers
|
[
"transformers",
"onnx",
"xlm-roberta",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingDynamic",
"en",
"dataset:mrpc",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-12-28T07:33:55Z |
---
language: en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingDynamic
- onnx
datasets:
- mrpc
metrics:
- f1
---
# INT8 xlm-roberta base finetuned MRPC
## Post-training dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/xlm-roberta-base-mrpc](https://huggingface.co/Intel/xlm-roberta-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8966|0.9010|
| **Model size (MB)** |354|1061|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/xlm-roberta-base-mrpc-int8-dynamic-inc')
```
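A hedged follow-up showing how the loaded ONNX model can score an MRPC-style paraphrase pair (the sentences are illustrative; it assumes the tokenizer files are shipped in this repo, otherwise load them from the fp32 model `Intel/xlm-roberta-base-mrpc`):
```python
# Hedged sketch, not from the original card: scoring a sentence pair with the INT8 ONNX model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Intel/xlm-roberta-base-mrpc-int8-dynamic-inc')
inputs = tokenizer(
    "The company posted record profits.",
    "Record profits were reported by the company.",
    return_tensors="pt",
)
logits = model(**inputs).logits  # `model` comes from the loading snippet above
print(logits.argmax(-1).item())  # 1 -> paraphrase, 0 -> not a paraphrase
```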
|
mialiam/layoutlm-funsd
|
mialiam
| 2023-06-27T09:57:54Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T15:11:42Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6993
- Answer: {'precision': 0.7155172413793104, 'recall': 0.8207663782447466, 'f1': 0.7645365572826713, 'number': 809}
- Header: {'precision': 0.2781954887218045, 'recall': 0.31092436974789917, 'f1': 0.2936507936507936, 'number': 119}
- Question: {'precision': 0.783303730017762, 'recall': 0.828169014084507, 'f1': 0.8051118210862619, 'number': 1065}
- Overall Precision: 0.7238
- Overall Recall: 0.7943
- Overall F1: 0.7574
- Overall Accuracy: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7894 | 1.0 | 10 | 1.6149 | {'precision': 0.029508196721311476, 'recall': 0.03337453646477132, 'f1': 0.031322505800464036, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18211920529801323, 'recall': 0.15492957746478872, 'f1': 0.167427701674277, 'number': 1065} | 0.1053 | 0.0963 | 0.1006 | 0.3666 |
| 1.4628 | 2.0 | 20 | 1.2718 | {'precision': 0.21764705882352942, 'recall': 0.22867737948084055, 'f1': 0.2230259192284509, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4429190751445087, 'recall': 0.5755868544600939, 'f1': 0.5006124948958759, 'number': 1065} | 0.3572 | 0.4004 | 0.3776 | 0.5813 |
| 1.1079 | 3.0 | 30 | 0.9869 | {'precision': 0.42190889370932755, 'recall': 0.48084054388133496, 'f1': 0.4494511842865395, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6041666666666666, 'recall': 0.6807511737089202, 'f1': 0.640176600441501, 'number': 1065} | 0.5215 | 0.5590 | 0.5396 | 0.6898 |
| 0.8376 | 4.0 | 40 | 0.8064 | {'precision': 0.6006036217303823, 'recall': 0.7379480840543882, 'f1': 0.6622296173044925, 'number': 809} | {'precision': 0.04918032786885246, 'recall': 0.025210084033613446, 'f1': 0.03333333333333334, 'number': 119} | {'precision': 0.6531302876480541, 'recall': 0.7248826291079812, 'f1': 0.6871384067645749, 'number': 1065} | 0.6133 | 0.6884 | 0.6487 | 0.7512 |
| 0.6793 | 5.0 | 50 | 0.7442 | {'precision': 0.6339468302658486, 'recall': 0.7663782447466008, 'f1': 0.693900391717963, 'number': 809} | {'precision': 0.15306122448979592, 'recall': 0.12605042016806722, 'f1': 0.1382488479262673, 'number': 119} | {'precision': 0.7100802854594113, 'recall': 0.7474178403755869, 'f1': 0.7282708142726441, 'number': 1065} | 0.6513 | 0.7180 | 0.6831 | 0.7720 |
| 0.5643 | 6.0 | 60 | 0.6937 | {'precision': 0.6551373346897253, 'recall': 0.796044499381953, 'f1': 0.7187499999999999, 'number': 809} | {'precision': 0.24175824175824176, 'recall': 0.18487394957983194, 'f1': 0.20952380952380953, 'number': 119} | {'precision': 0.71, 'recall': 0.8, 'f1': 0.752317880794702, 'number': 1065} | 0.6675 | 0.7617 | 0.7115 | 0.7895 |
| 0.4869 | 7.0 | 70 | 0.6780 | {'precision': 0.676130389064143, 'recall': 0.7948084054388134, 'f1': 0.7306818181818182, 'number': 809} | {'precision': 0.2072072072072072, 'recall': 0.19327731092436976, 'f1': 0.2, 'number': 119} | {'precision': 0.7147568013190437, 'recall': 0.8140845070422535, 'f1': 0.7611940298507464, 'number': 1065} | 0.6738 | 0.7692 | 0.7184 | 0.7962 |
| 0.439 | 8.0 | 80 | 0.6706 | {'precision': 0.696068012752391, 'recall': 0.8096415327564895, 'f1': 0.7485714285714284, 'number': 809} | {'precision': 0.2184873949579832, 'recall': 0.2184873949579832, 'f1': 0.2184873949579832, 'number': 119} | {'precision': 0.7454858125537404, 'recall': 0.8140845070422535, 'f1': 0.7782764811490125, 'number': 1065} | 0.6964 | 0.7767 | 0.7343 | 0.8022 |
| 0.3922 | 9.0 | 90 | 0.6689 | {'precision': 0.707742639040349, 'recall': 0.8022249690976514, 'f1': 0.7520278099652375, 'number': 809} | {'precision': 0.21774193548387097, 'recall': 0.226890756302521, 'f1': 0.2222222222222222, 'number': 119} | {'precision': 0.7601380500431406, 'recall': 0.8272300469483568, 'f1': 0.7922661870503598, 'number': 1065} | 0.7077 | 0.7812 | 0.7427 | 0.8038 |
| 0.3518 | 10.0 | 100 | 0.6692 | {'precision': 0.7065677966101694, 'recall': 0.8244746600741656, 'f1': 0.7609811751283514, 'number': 809} | {'precision': 0.23529411764705882, 'recall': 0.23529411764705882, 'f1': 0.23529411764705882, 'number': 119} | {'precision': 0.7663469921534438, 'recall': 0.8253521126760563, 'f1': 0.7947558770343581, 'number': 1065} | 0.7122 | 0.7898 | 0.7490 | 0.8092 |
| 0.3165 | 11.0 | 110 | 0.6863 | {'precision': 0.714902807775378, 'recall': 0.8182941903584673, 'f1': 0.7631123919308358, 'number': 809} | {'precision': 0.2631578947368421, 'recall': 0.29411764705882354, 'f1': 0.27777777777777773, 'number': 119} | {'precision': 0.7724867724867724, 'recall': 0.8225352112676056, 'f1': 0.7967257844474761, 'number': 1065} | 0.7173 | 0.7893 | 0.7516 | 0.8083 |
| 0.3043 | 12.0 | 120 | 0.6898 | {'precision': 0.7173678532901834, 'recall': 0.8220024721878862, 'f1': 0.7661290322580644, 'number': 809} | {'precision': 0.27692307692307694, 'recall': 0.3025210084033613, 'f1': 0.2891566265060241, 'number': 119} | {'precision': 0.7812223206377326, 'recall': 0.828169014084507, 'f1': 0.8040109389243391, 'number': 1065} | 0.7242 | 0.7943 | 0.7576 | 0.8084 |
| 0.2853 | 13.0 | 130 | 0.6935 | {'precision': 0.7167755991285403, 'recall': 0.8133498145859085, 'f1': 0.7620150550086855, 'number': 809} | {'precision': 0.30303030303030304, 'recall': 0.33613445378151263, 'f1': 0.3187250996015936, 'number': 119} | {'precision': 0.7855251544571933, 'recall': 0.8356807511737089, 'f1': 0.8098271155595996, 'number': 1065} | 0.7274 | 0.7968 | 0.7605 | 0.8109 |
| 0.2724 | 14.0 | 140 | 0.6985 | {'precision': 0.7212581344902386, 'recall': 0.8220024721878862, 'f1': 0.7683419988445985, 'number': 809} | {'precision': 0.2900763358778626, 'recall': 0.31932773109243695, 'f1': 0.304, 'number': 119} | {'precision': 0.786096256684492, 'recall': 0.828169014084507, 'f1': 0.8065843621399177, 'number': 1065} | 0.7287 | 0.7953 | 0.7606 | 0.8091 |
| 0.2741 | 15.0 | 150 | 0.6993 | {'precision': 0.7155172413793104, 'recall': 0.8207663782447466, 'f1': 0.7645365572826713, 'number': 809} | {'precision': 0.2781954887218045, 'recall': 0.31092436974789917, 'f1': 0.2936507936507936, 'number': 119} | {'precision': 0.783303730017762, 'recall': 0.828169014084507, 'f1': 0.8051118210862619, 'number': 1065} | 0.7238 | 0.7943 | 0.7574 | 0.8095 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
michaelfeil/ct2fast-starcoder
|
michaelfeil
| 2023-06-27T09:50:37Z | 22 | 13 |
transformers
|
[
"transformers",
"gpt_bigcode",
"text-generation",
"ctranslate2",
"int8",
"float16",
"code",
"dataset:bigcode/the-stack-dedup",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"license:bigcode-openrail-m",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-23T00:18:05Z |
---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- ctranslate2
- int8
- float16
- code
model-index:
- name: StarCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval (Prompted)
metrics:
- name: pass@1
type: pass@1
value: 0.408
verified: false
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.336
verified: false
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 0.527
verified: false
- task:
type: text-generation
dataset:
type: ds1000
name: DS-1000 (Overall Completion)
metrics:
- name: pass@1
type: pass@1
value: 0.26
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 0.3155
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C#)
metrics:
- name: pass@1
type: pass@1
value: 0.2101
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (D)
metrics:
- name: pass@1
type: pass@1
value: 0.1357
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Go)
metrics:
- name: pass@1
type: pass@1
value: 0.1761
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.3022
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Julia)
metrics:
- name: pass@1
type: pass@1
value: 0.2302
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 0.3079
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Lua)
metrics:
- name: pass@1
type: pass@1
value: 0.2389
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 0.2608
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Perl)
metrics:
- name: pass@1
type: pass@1
value: 0.1734
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.3357
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (R)
metrics:
- name: pass@1
type: pass@1
value: 0.155
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Ruby)
metrics:
- name: pass@1
type: pass@1
value: 0.0124
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Racket)
metrics:
- name: pass@1
type: pass@1
value: 0.0007
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 0.2184
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Scala)
metrics:
- name: pass@1
type: pass@1
value: 0.2761
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Bash)
metrics:
- name: pass@1
type: pass@1
value: 0.1046
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Swift)
metrics:
- name: pass@1
type: pass@1
value: 0.2274
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (TypeScript)
metrics:
- name: pass@1
type: pass@1
value: 0.3229
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder).
```bash
pip install "hf-hub-ctranslate2>=2.12.0" "ctranslate2>=3.16.0"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-starcoder"
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("{ORG}/{NAME}")
)
outputs = model.generate(
text=["def fibonnaci(", "User: How are you doing? Bot:"],
max_length=64,
include_prompt_in_result=False
)
print(outputs)
```
Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
Converted on 2023-06-27 using
```
ct2-transformers-converter --model bigcode/starcoder --output_dir ~/tmp-ct2fast-starcoder --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
```
# Licence and other remarks:
This is just a quantized version. License conditions are intended to be identical to the original Hugging Face repo.
# Original description
# StarCoder

Play with the model on the [StarCoder Playground](https://huggingface.co/spaces/bigcode/bigcode-playground).
## Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
## Model Summary
The StarCoder models are 15.5B parameter models trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** 80+ Programming languages
## Use
### Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 250k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 512 Tesla A100
- **Training time:** 24 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
hongrui/mammogram_v_2_2
|
hongrui
| 2023-06-27T09:48:52Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-26T22:46:35Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
arildgrimstveit/vicuna7b
|
arildgrimstveit
| 2023-06-27T09:43:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-27T08:36:21Z |
---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
If you still want to use this old version, please see the compatibility notes and differences between versions [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
**NOTE: This "delta model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0).
<br>
<br>
# Vicuna Model Card
## Model details
**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.
**Model date:**
Vicuna was trained between March 2023 and April 2023.
**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
**Paper or resources for more information:**
https://lmsys.org/blog/2023-03-30-vicuna/
**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues
## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## Training dataset
70K conversations collected from ShareGPT.com.
## Evaluation dataset
A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs.
See https://lmsys.org/blog/2023-03-30-vicuna/ for more details.
|
apparaomulpuri/alpaca-HJ-model
|
apparaomulpuri
| 2023-06-27T09:39:37Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T05:12:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
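The card lists only the quantization config. A hedged loading sketch with PEFT, which assumes the saved adapter config records the base model under `base_model_name_or_path`:
```python
# Hedged sketch, not from the original card: load the 8-bit base model and attach the adapter.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "apparaomulpuri/alpaca-HJ-model"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,   # matches the load_in_8bit: True setting listed above
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```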
|
yeounyi/Taxi-v3
|
yeounyi
| 2023-06-27T09:31:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T09:31:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.62
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="yeounyi/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nielsr/segformer-finetuned-sidewalk
|
nielsr
| 2023-06-27T09:10:51Z | 169 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-04-06T09:56:13Z |
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# Segformer-b0, fine-tuned on Sidewalk
This repository contains the weights of a `SegFormerForSemanticSegmentation` model.
It was trained using the example script.
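The card stops there; a hedged inference sketch follows (the image URL is the widget example from the frontmatter above):
```python
# Hedged sketch, not from the original card: semantic segmentation of a street scene.
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "nielsr/segformer-finetuned-sidewalk"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (batch, num_labels, height/4, width/4)
print(logits.shape)
```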
|
sertemo/distilbert-base-uncased-finetuned-imdb
|
sertemo
| 2023-06-27T09:02:02Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-27T08:02:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4192
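A hedged usage sketch with the `fill-mask` pipeline (the example sentence is illustrative, not from the original card):
```python
# Hedged sketch, not from the original card: predict the masked token with the IMDB-adapted model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sertemo/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```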
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6249 | 1.0 | 1250 | 2.4327 |
| 2.5109 | 2.0 | 2500 | 2.4115 |
| 2.4577 | 3.0 | 3750 | 2.3792 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
crcdng/distilhubert-finetuned-gtzan
|
crcdng
| 2023-06-27T09:00:39Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-26T21:21:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.8823529411764706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
- Accuracy: 0.8824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5308 | 1.0 | 38 | 1.4348 | 0.6471 |
| 1.0143 | 2.0 | 76 | 0.9504 | 0.8824 |
| 0.8684 | 3.0 | 114 | 0.8092 | 0.8824 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Rryay12/ppo-SnowballTarget
|
Rryay12
| 2023-06-27T08:58:22Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-27T08:48:38Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Rryay12/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
SHENMU007/neunit-changchun-20230626V2
|
SHENMU007
| 2023-06-27T08:57:33Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-27T05:55:50Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: neunit-changchun-20230626V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neunit-changchun-20230626V2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0047 | 1.0 | 3303 | 0.0019 | 0.9997 |
| 0.0029 | 2.0 | 6606 | 0.0010 | 0.9996 |
| 0.0044 | 3.0 | 9909 | 0.0003 | 0.9999 |
| 0.0006 | 4.0 | 13212 | 0.0000 | 1.0 |
| 0.0 | 5.0 | 16515 | 0.0001 | 1.0000 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Skwang/seungwan
|
Skwang
| 2023-06-27T08:55:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T08:45:35Z |
---
license: creativeml-openrail-m
---
|
florentgbelidji/blip_captioning
|
florentgbelidji
| 2023-06-27T08:52:34Z | 0 | 7 |
generic
|
[
"generic",
"image-to-text",
"image-captioning",
"endpoints-template",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-08-04T22:20:58Z |
---
tags:
- image-to-text
- image-captioning
- endpoints-template
license: bsd-3-clause
library_name: generic
---
# Fork of [salesforce/BLIP](https://github.com/salesforce/BLIP) for an `image-captioning` task on a 🤗 Inference Endpoint.
This repository implements a `custom` task for `image-captioning` for 🤗 Inference Endpoints. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/florentgbelidji/blip_captioning/blob/main/pipeline.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _double check that it is selected_
### Expected request payload
```json
{
"image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....", // base64 image as bytes
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
1. Prepare an image.
```bash
!wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```
2. Run the request
```python
import json
from typing import List
import requests as r
import base64
ENDPOINT_URL = ""
HF_TOKEN = ""
def predict(path_to_image: str = None):
with open(path_to_image, "rb") as i:
image = base64.b64encode(i.read()).decode("utf-8")  # base64-encode so the JSON payload is serializable, matching the documented payload above
payload = {
"inputs": [image],
"parameters": {
"do_sample": True,
"top_p":0.9,
"min_length":5,
"max_length":20
}
}
response = r.post(
ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
)
return response.json()
prediction = predict(
path_to_image="palace.jpg"
)
```
Example parameters depending on the decoding strategy:
1. Beam search
```
"parameters": {
"num_beams":5,
"max_length":20
}
```
2. Nucleus sampling
```
"parameters": {
"num_beams":1,
"max_length":20,
"do_sample": True,
"top_k":50,
"top_p":0.95
}
```
3. Contrastive search
```
"parameters": {
"penalty_alpha":0.6,
"top_k":4
"max_length":512
}
```
See the [generate()](https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/text_generation#transformers.GenerationMixin.generate) docs for additional details.
Expected output:
```python
['buckingham palace with flower beds and red flowers']
```
|
berluk/cow-detection
|
berluk
| 2023-06-27T08:51:37Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-16T13:05:21Z |
---
pipeline_tag: image-classification
---
|
HxLab/q-Taxi-v3
|
HxLab
| 2023-06-27T08:41:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T08:41:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub is the helper from the Deep RL Course notebook (it downloads and unpickles the Q-table dict)
model = load_from_hub(repo_id="HxLab/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
joohwan/chanhyuk-gd
|
joohwan
| 2023-06-27T08:40:39Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-27T07:12:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: chanhyuk-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chanhyuk-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0837
- Wer: 9.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.246 | 0.18 | 500 | 0.2557 | 24.6951 |
| 0.1363 | 0.36 | 1000 | 0.1898 | 18.1750 |
| 0.094 | 0.54 | 1500 | 0.1450 | 14.4255 |
| 0.0842 | 0.72 | 2000 | 0.1100 | 15.4495 |
| 0.0595 | 0.9 | 2500 | 0.0916 | 10.6008 |
| 0.0141 | 1.08 | 3000 | 0.0837 | 9.9533 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yeounyi/PPO-LunarLander-v2
|
yeounyi
| 2023-06-27T08:39:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T07:41:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 293.31 +/- 19.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename inside the repo is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub("yeounyi/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ankit15nov/bloomz-3b
|
Ankit15nov
| 2023-06-27T08:39:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T08:39:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
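For reference, a minimal sketch of the equivalent `transformers` quantization config (shown in isolation, since this card does not name the base model it was trained from):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above: 4-bit NF4 weights,
# double quantization, and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```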
### Framework versions
- PEFT 0.4.0.dev0
|
ShubhLM/my-new-model-id
|
ShubhLM
| 2023-06-27T08:29:41Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-27T06:29:23Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: my-new-model-id
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-new-model-id
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6585
- Answer: {'precision': 0.7224043715846995, 'recall': 0.8170580964153276, 'f1': 0.7668213457076567, 'number': 809}
- Header: {'precision': 0.2777777777777778, 'recall': 0.33613445378151263, 'f1': 0.30418250950570347, 'number': 119}
- Question: {'precision': 0.7818343722172751, 'recall': 0.8244131455399061, 'f1': 0.8025594149908593, 'number': 1065}
- Overall Precision: 0.7236
- Overall Recall: 0.7923
- Overall F1: 0.7564
- Overall Accuracy: 0.8164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8057 | 1.0 | 10 | 1.5463 | {'precision': 0.016229712858926344, 'recall': 0.016069221260815822, 'f1': 0.01614906832298137, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.16498993963782696, 'recall': 0.07699530516431925, 'f1': 0.10499359795134443, 'number': 1065} | 0.0732 | 0.0477 | 0.0577 | 0.3858 |
| 1.4605 | 2.0 | 20 | 1.2487 | {'precision': 0.24860335195530725, 'recall': 0.3300370828182942, 'f1': 0.2835900159320234, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.45215485756026297, 'recall': 0.5812206572769953, 'f1': 0.5086277732128184, 'number': 1065} | 0.3627 | 0.4446 | 0.3995 | 0.5992 |
| 1.1219 | 3.0 | 30 | 0.9461 | {'precision': 0.45073375262054505, 'recall': 0.5315203955500618, 'f1': 0.4878048780487805, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.5785597381342062, 'recall': 0.6638497652582159, 'f1': 0.6182772190642762, 'number': 1065} | 0.5204 | 0.5705 | 0.5443 | 0.6840 |
| 0.847 | 4.0 | 40 | 0.7978 | {'precision': 0.5685131195335277, 'recall': 0.723114956736712, 'f1': 0.6365614798694234, 'number': 809} | {'precision': 0.07142857142857142, 'recall': 0.025210084033613446, 'f1': 0.037267080745341616, 'number': 119} | {'precision': 0.6571180555555556, 'recall': 0.7107981220657277, 'f1': 0.6829048263419035, 'number': 1065} | 0.6050 | 0.6749 | 0.6380 | 0.7407 |
| 0.6811 | 5.0 | 50 | 0.7134 | {'precision': 0.6322444678609063, 'recall': 0.7416563658838071, 'f1': 0.6825938566552902, 'number': 809} | {'precision': 0.2328767123287671, 'recall': 0.14285714285714285, 'f1': 0.17708333333333334, 'number': 119} | {'precision': 0.7031924072476272, 'recall': 0.7652582159624414, 'f1': 0.7329136690647481, 'number': 1065} | 0.6566 | 0.7185 | 0.6862 | 0.7763 |
| 0.5706 | 6.0 | 60 | 0.6581 | {'precision': 0.663820704375667, 'recall': 0.7688504326328801, 'f1': 0.7124856815578464, 'number': 809} | {'precision': 0.23376623376623376, 'recall': 0.15126050420168066, 'f1': 0.1836734693877551, 'number': 119} | {'precision': 0.7145187601957586, 'recall': 0.8225352112676056, 'f1': 0.7647315582714972, 'number': 1065} | 0.6768 | 0.7607 | 0.7163 | 0.7990 |
| 0.5016 | 7.0 | 70 | 0.6413 | {'precision': 0.6694386694386695, 'recall': 0.796044499381953, 'f1': 0.7272727272727273, 'number': 809} | {'precision': 0.19626168224299065, 'recall': 0.17647058823529413, 'f1': 0.18584070796460178, 'number': 119} | {'precision': 0.7557446808510638, 'recall': 0.8338028169014085, 'f1': 0.7928571428571429, 'number': 1065} | 0.6921 | 0.7792 | 0.7331 | 0.8033 |
| 0.4435 | 8.0 | 80 | 0.6286 | {'precision': 0.6945031712473573, 'recall': 0.8121137206427689, 'f1': 0.7487179487179487, 'number': 809} | {'precision': 0.23931623931623933, 'recall': 0.23529411764705882, 'f1': 0.23728813559322035, 'number': 119} | {'precision': 0.7668122270742358, 'recall': 0.8244131455399061, 'f1': 0.7945701357466064, 'number': 1065} | 0.7079 | 0.7842 | 0.7441 | 0.8108 |
| 0.4078 | 9.0 | 90 | 0.6405 | {'precision': 0.6957470010905126, 'recall': 0.788627935723115, 'f1': 0.7392815758980301, 'number': 809} | {'precision': 0.2711864406779661, 'recall': 0.2689075630252101, 'f1': 0.270042194092827, 'number': 119} | {'precision': 0.7794508414526129, 'recall': 0.8262910798122066, 'f1': 0.8021877848678213, 'number': 1065} | 0.7163 | 0.7777 | 0.7457 | 0.8137 |
| 0.3657 | 10.0 | 100 | 0.6364 | {'precision': 0.7142857142857143, 'recall': 0.8096415327564895, 'f1': 0.7589803012746235, 'number': 809} | {'precision': 0.2807017543859649, 'recall': 0.2689075630252101, 'f1': 0.27467811158798283, 'number': 119} | {'precision': 0.7870452528837621, 'recall': 0.8328638497652582, 'f1': 0.8093065693430658, 'number': 1065} | 0.7294 | 0.7898 | 0.7584 | 0.8098 |
| 0.335 | 11.0 | 110 | 0.6427 | {'precision': 0.7027896995708155, 'recall': 0.8096415327564895, 'f1': 0.7524411257897761, 'number': 809} | {'precision': 0.26865671641791045, 'recall': 0.3025210084033613, 'f1': 0.2845849802371542, 'number': 119} | {'precision': 0.7813620071684588, 'recall': 0.8187793427230047, 'f1': 0.7996331957817515, 'number': 1065} | 0.7163 | 0.7842 | 0.7487 | 0.8132 |
| 0.3103 | 12.0 | 120 | 0.6505 | {'precision': 0.7311946902654868, 'recall': 0.8170580964153276, 'f1': 0.7717454757734967, 'number': 809} | {'precision': 0.2595419847328244, 'recall': 0.2857142857142857, 'f1': 0.27199999999999996, 'number': 119} | {'precision': 0.7859054415700267, 'recall': 0.8272300469483568, 'f1': 0.8060384263494967, 'number': 1065} | 0.7310 | 0.7908 | 0.7597 | 0.8160 |
| 0.3007 | 13.0 | 130 | 0.6494 | {'precision': 0.7219193020719739, 'recall': 0.8182941903584673, 'f1': 0.7670915411355737, 'number': 809} | {'precision': 0.27692307692307694, 'recall': 0.3025210084033613, 'f1': 0.2891566265060241, 'number': 119} | {'precision': 0.7930419268510259, 'recall': 0.8347417840375587, 'f1': 0.8133577310155535, 'number': 1065} | 0.7320 | 0.7963 | 0.7628 | 0.8161 |
| 0.2831 | 14.0 | 140 | 0.6593 | {'precision': 0.7202185792349727, 'recall': 0.8145859085290482, 'f1': 0.7645011600928074, 'number': 809} | {'precision': 0.273972602739726, 'recall': 0.33613445378151263, 'f1': 0.3018867924528302, 'number': 119} | {'precision': 0.7793468667255075, 'recall': 0.8291079812206573, 'f1': 0.8034576888080072, 'number': 1065} | 0.7211 | 0.7938 | 0.7557 | 0.8144 |
| 0.2913 | 15.0 | 150 | 0.6585 | {'precision': 0.7224043715846995, 'recall': 0.8170580964153276, 'f1': 0.7668213457076567, 'number': 809} | {'precision': 0.2777777777777778, 'recall': 0.33613445378151263, 'f1': 0.30418250950570347, 'number': 119} | {'precision': 0.7818343722172751, 'recall': 0.8244131455399061, 'f1': 0.8025594149908593, 'number': 1065} | 0.7236 | 0.7923 | 0.7564 | 0.8164 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mukeiZ/osusume
|
mukeiZ
| 2023-06-27T08:22:59Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-05-05T08:35:55Z |
---
license: other
---
★Lora-pri_ver1
The trigger word is prishe. It works without it, but adding it may improve the likeness.
The hat can be put on or removed by adding or omitting crown.
Depending on the generation model, if you are not trying to reproduce the outfit, outputs start looking like prishe from around epoch 25.
Higher epoch numbers reproduce the character more faithfully but may lose versatility. The file without a number is the final epoch.
Adjusting prompt emphasis or the LoRA strength also makes outfits and poses flexible. Using LoRA Block Weight is also effective.
----------------------------------------
★Lora-PandU
Two characters trained in one LoRA; use prishe and ulmia to draw each one.
Tags were split per character to keep them from mixing, but it is unclear whether this succeeded.
If unwanted elements creep in, using negative prompts is another option.
Outfit fidelity is low at any epoch, but judging by the face, around epoch 35 may be the best choice. For the unnumbered file, applying it with layer-wise (LoRA Block Weight) weighting is recommended.
● Tag notes
・prishe
Headgear: crown
Chest ribbon and stone: red ribbon
Ears: pointy ears (removing it may produce Hume ears)
Outfit: costume (removing it does not undress the character)
Legwear: short pants
Footwear: brown footwear
Sample tags
prishe, costume, crown, white background, open mouth, red ribbon, pointy ears, brown footwear, hand on own hip, short pants
・ulmia
ulmia, uniform, solo, standing, full body, brown footwear, medium hair, brown eyes, ear piercing, circlet, orange hair, holding harp, hair ornament, jewelry
Headgear: circlet, hair ornament
Ears: elf
Outfit: uniform
Legwear: black leggings, gaiters
Footwear: brown footwear
Instrument: harp
Sample tags
ulmia, uniform, brown footwear, elf, white background, harp, hair ornament, black leggings, gaiters, circlet
|
memotirre90/Equipo16_gpt2-HotelSentiment
|
memotirre90
| 2023-06-27T08:08:56Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T07:01:15Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Equipo16_gpt2-HotelSentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Equipo16_gpt2-HotelSentiment
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6560
- Accuracy: 0.8994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
MQ-playground/ppo-Huggy
|
MQ-playground
| 2023-06-27T08:08:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-27T08:08:37Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MQ-playground/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NchuNLP/Agriculture-Classification
|
NchuNLP
| 2023-06-27T07:59:07Z | 113 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T08:25:29Z |
---
language: zh
widget:
- text: "水稻生長的適宜溫度是多少?"
- text: "心臟病的病因?"
---
# Agriculture-Classification
This model classifies whether a given question is agricultural or not.
## Usage
### In Transformers
```python
from transformers import BertTokenizer, BertForSequenceClassification, pipeline
model_name = "NchuNLP/Agriculture-Classification"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)
# Get predictions
nlp = pipeline('text-classification', model=model, tokenizer=tokenizer)
query = "水稻生長的適宜溫度是多少?"
res = nlp(query)
```
## Authors
**Peng-Yi Lin:** gigilinqoo@gmail.com
**Yao-Chung Fan:** yfan@nchu.edu.tw
## About us
The research of the [NCHU Natural Language Processing Lab](https://nlpnchu.org/) (National Chung Hsing University) centers on deep learning techniques for text mining and natural language processing. Current lab members focus on two directions: machine reading comprehension and natural language generation.
## More Information
<p>For more info about the NCHU NLP Lab, visit our <strong><a href="https://demo.nlpnchu.org/">Lab Online Demo</a></strong> and <strong><a href="https://github.com/NCHU-NLP-Lab">GitHub</a></strong>.</p>
|
andrewshi/bert-finetuned-squad
|
andrewshi
| 2023-06-27T07:55:41Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T00:53:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
The BERT fine-tuned SQuAD model is a version of the BERT (Bidirectional Encoder Representations from Transformers) model that has been fine-tuned on the Stanford Question Answering Dataset (SQuAD). It is designed to answer questions based on the context given. The SQuAD dataset is a collection of 100k+ questions and answers based on Wikipedia articles. Fine-tuning the model on this dataset allows it to provide precise answers to a wide array of questions based on a given context.
## Intended uses & limitations
This model is intended to be used for question-answering tasks. Given a question and a context (a piece of text containing the information to answer the question), the model will return the text span in the context that most likely contains the answer. This model is not intended to generate creative content, conduct sentiment analysis, or predict future events.
It's important to note that the model's accuracy is heavily dependent on the relevance and quality of the context it is provided. If the context does not contain the answer to the question, the model will still return a text span, which may not make sense. Additionally, the model may struggle with nuanced or ambiguous questions as it may not fully understand the subtleties of human language.
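A minimal usage sketch with the `transformers` pipeline API (the question and context below are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive question-answering pipeline.
qa = pipeline("question-answering", model="andrewshi/bert-finetuned-squad")

context = (
    "The Stanford Question Answering Dataset (SQuAD) contains more than 100,000 "
    "questions written by crowd workers about Wikipedia articles."
)
result = qa(question="Who wrote the questions in SQuAD?", context=context)

# The pipeline returns the most likely answer span together with a confidence score.
print(result["answer"], result["score"])
```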
## Training and evaluation data
The model was trained on the SQuAD dataset, encompassing over 87,599 questions generated by crowd workers from various Wikipedia articles. The answers are text segments from the relevant reading passage. For evaluation, a distinct subset of the SQuAD, containing 10,570 instances, unseen by the model during training, was employed.
## Training procedure
The model was initially pretrained on a large corpus of text in an unsupervised manner, learning to predict masked tokens in a sentence. The pretraining was done on the bert-base-cased model, which was trained on English text in a case-sensitive manner. After this, the model was fine-tuned on the SQuAD dataset. During fine-tuning, the model was trained to predict the start and end positions of the answer in the context text given a question.
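To make the start/end-position prediction concrete, here is a minimal inference sketch (the question and context are illustrative; it takes a greedy argmax over the logits and ignores edge cases such as the end index falling before the start index):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "andrewshi/bert-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What dataset was the model fine-tuned on?"
context = "This model is a fine-tuned version of bert-base-cased on the SQuAD dataset."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Every token gets a score as a potential answer start and answer end.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```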
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
- exact_match: 81.0406811731315
- f1: 88.65884513439593
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
with-madrid/h2ogpt-gm-oasst1-en-2048-open-llama-13b-GGML
|
with-madrid
| 2023-06-27T07:41:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-06-27T07:11:31Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# H2O Open LLaMA 13B fine-tuned on Open Assistant (GGML)
These files are GGML-format model files for [H2O Open LLaMA 13B fine-tuned on Open Assistant](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b).
## Prompt template
```
prompt = "<|prompt|>How are you?</s><|answer|>"
```
## Provided files
- h2ogpt-gm-oasst1-en-2048-open-llama-13b_ggml_q4_0.bin
- h2ogpt-gm-oasst1-en-2048-open-llama-13b_ggml_q4_1.bin
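A minimal inference sketch with `llama-cpp-python` (the local file path and generation settings are assumptions, and a version of the library that still reads GGML files is required):
```python
from llama_cpp import Llama

# Path to one of the provided quantized files, downloaded locally (assumed).
llm = Llama(model_path="h2ogpt-gm-oasst1-en-2048-open-llama-13b_ggml_q4_0.bin")

# Use the <|prompt|> / <|answer|> template shown above.
prompt = "<|prompt|>Why is drinking water so healthy?</s><|answer|>"
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```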
# Original Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.0.dev0
pip install accelerate==0.19.0
pip install torch==2.0.1
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 5120, padding_idx=0)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=5120, out_features=5120, bias=False)
(k_proj): Linear(in_features=5120, out_features=5120, bias=False)
(v_proj): Linear(in_features=5120, out_features=5120, bias=False)
(o_proj): Linear(in_features=5120, out_features=5120, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=5120, out_features=13824, bias=False)
(down_proj): Linear(in_features=13824, out_features=5120, bias=False)
(up_proj): Linear(in_features=5120, out_features=13824, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=5120, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
haddadalwi/multi-qa-mpnet-base-dot-v1-finetuned-squad2-all
|
haddadalwi
| 2023-06-27T07:29:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T07:16:35Z |
---
tags:
- generated_from_trainer
model-index:
- name: multi-qa-mpnet-base-dot-v1-finetuned-squad2-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-qa-mpnet-base-dot-v1-finetuned-squad2-all
This model is a fine-tuned version of [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8521 | 1.0 | 840 | 1.3531 |
| 1.2732 | 2.0 | 1680 | 1.0718 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
raafat3-16/text_summary
|
raafat3-16
| 2023-06-27T07:27:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T07:27:02Z |
---
license: creativeml-openrail-m
---
|
with-madrid/h2ogpt-gm-oasst1-en-2048-open-llama-7b-GGML
|
with-madrid
| 2023-06-27T07:26:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-06-26T15:57:42Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# H2O Open LLaMA 7B fine-tuned on Open Assistant (GGML)
These files are GGML-format model files for [H2O Open LLaMA 7B fine-tuned on Open Assistant](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b).
## Prompt template
```
prompt = "<|prompt|>How are you?</s><|answer|>"
```
## Provided files
- ggml_model-q4_0.bin
- ggml_model-q4_1.bin
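As a sketch, the files can also be loaded straight from this repo with `ctransformers` (a GGML-compatible version of the library is assumed, as are the generation settings):
```python
from ctransformers import AutoModelForCausalLM

# Load one of the provided q4_0 files with the llama backend.
llm = AutoModelForCausalLM.from_pretrained(
    "with-madrid/h2ogpt-gm-oasst1-en-2048-open-llama-7b-GGML",
    model_file="ggml_model-q4_0.bin",
    model_type="llama",
)

# Use the <|prompt|> / <|answer|> template shown above.
print(llm("<|prompt|>How are you?</s><|answer|>", max_new_tokens=128))
```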
# Original Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.30.0.dev0
pip install accelerate==0.19.0
pip install torch==2.0.1
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-v1
|
hw2942
| 2023-06-27T07:22:30Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T07:11:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-vix-sz50-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7690
- Accuracy: 0.5577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 19 | 0.7136 | 0.5 |
| No log | 2.0 | 38 | 0.7423 | 0.5 |
| No log | 3.0 | 57 | 0.8728 | 0.5 |
| No log | 4.0 | 76 | 0.6878 | 0.5 |
| No log | 5.0 | 95 | 0.7361 | 0.5385 |
| No log | 6.0 | 114 | 0.7651 | 0.5577 |
| No log | 7.0 | 133 | 0.8437 | 0.6346 |
| No log | 8.0 | 152 | 0.8604 | 0.5577 |
| No log | 9.0 | 171 | 0.7503 | 0.6154 |
| No log | 10.0 | 190 | 0.7690 | 0.5577 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nojiyoon/nallm-polyglot-ko-1.3b
|
nojiyoon
| 2023-06-27T07:20:53Z | 4 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-20T08:43:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
rahuldshetty/open-llama-13b-open-instruct-8bit
|
rahuldshetty
| 2023-06-27T07:12:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2023-06-27T07:03:53Z |
---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# rahuldshetty/open-llama-13b-open-instruct-8bit
This is an 8-bit quantized version of VMware's Open-LLaMA-13B model.
Quantization is performed using [bitsandbytes](https://huggingface.co/docs/transformers/main_classes/quantization#load-a-large-model-in-8bit).
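A minimal sketch of loading the checkpoint in 8-bit with `transformers` and `bitsandbytes` (the argument choices are assumptions; a CUDA GPU is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rahuldshetty/open-llama-13b-open-instruct-8bit"

# use_fast=False per the original card's note about the fast tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

# load_in_8bit needs bitsandbytes; device_map="auto" places layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
)
```
The Alpaca-style prompt template shown in the original card below applies unchanged.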
**Below details are taken from the official model repository**
# VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>
<b> NOTE </b> : The model was trained using the Alpaca prompt template \
<b> NOTE </b> : The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer\
<b> NOTE </b> : The model might struggle with code as the tokenizer merges multiple spaces
## License
- <b>Commercially Viable </b>
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst,dolly, hhrlhf)
## Use in Transformers
```
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = 'VMware/open-llama-13b-open-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'
inputt = prompt_template.format(instruction= prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")
output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])
print(output)
```
## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
## Evaluation
<B>TODO</B>
|
neukg/TechGPT-7B
|
neukg
| 2023-06-27T07:08:59Z | 0 | 17 | null |
[
"pytorch",
"text2text-generation",
"zh",
"en",
"arxiv:2304.07854",
"license:gpl-3.0",
"region:us"
] |
text2text-generation
| 2023-06-23T10:10:11Z |
---
license: gpl-3.0
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---
# TechGPT: Technology-Oriented Generative Pretrained Transformer
Demo: [TechGPT-neukg](http://techgpt.neukg.com) <br>
Github: [neukg/TechGPT](https://github.com/neukg/TechGPT)
## 简介 Introduction
TechGPT是[“东北大学知识图谱研究组”](http://faculty.neu.edu.cn/renfeiliang)发布的垂直领域大语言模型。目前已开源全量微调的7B版本。<br>
TechGPT主要强化了如下三类任务:
- 以“知识图谱构建”为核心的关系三元组抽取等各类信息抽取任务
- 以“阅读理解”为核心的各类智能问答任务。
- 以“文本理解”为核心的关键词生成等各类序列生成任务。
在这三大自然语言处理核心能力之内,TechGPT还具备了对计算机科学、材料、机械、冶金、金融和航空航天等十余种垂直专业领域自然语言文本的处理能力。
目前,TechGPT通过提示和指令输入方式的不同,支持单轮对话和多轮对话,涵盖了领域术语抽取、命名实体识别、关系三元组抽取、文本关键词生成、标题生成摘要、摘要生成标题、文本领域识别、机器阅读理解、基础常识问答、基于上下文的知识问答、建议咨询类问答、文案生成、中英互译和简单代码生成等多项自然语言理解和生成能力。
---
TechGPT mainly strengthens the following three types of tasks:
- Various information extraction tasks such as relation triplet extraction with "knowledge graph construction" as the core
- Various intelligent question-and-answer tasks centered on "reading comprehension".
- Various sequence generation tasks such as keyword generation with "text understanding" as the core.
Within these three core natural language processing capabilities, TechGPT also has the ability to process natural language texts in more than ten vertical professional fields such as computer science, materials, machinery, metallurgy, finance, and aerospace.
Depending on how prompts and instructions are given, TechGPT supports both single-turn and multi-turn dialogue, covering domain term extraction, named entity recognition, relation triple extraction, text keyword generation, title-to-abstract generation, abstract-to-title generation, text domain recognition, machine reading comprehension, basic common-sense question answering, context-based knowledge question answering, advisory question answering, copywriting, Chinese-English translation, simple code generation, and many other natural language understanding and generation capabilities.
## 模型功能 Model Capabilities
In addition to retaining basic dialogue and translation abilities, the model places particular emphasis on data built from **TechKG**. As a result, **TechGPT** is especially good, in professional domains such as computer science, at tasks like entity and entity-type extraction, relation triple extraction, title expansion, abstract compression, keyword generation, and machine reading comprehension.
Several of TechGPT's core capabilities are demonstrated below.
1. **Named entity recognition**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>抽取出下面文本的实体和实体类型:<br/>《女人树》,国产电视剧,由导演田迪执导,根据作家子页的原著改编,故事从1947年开始,跨越了解放战争和建国初期两大历史时期,展现了战斗在隐形战线上的人民英雄是如何不惧怕任何危险,不计较个人牺牲,甚至不顾人民内部的误解和生死裁决,都不暴露个人真实身份,至死不渝,与敌人周旋到底的英雄故事。</td> <td>实体名: 女人树, 实体类型:影视作品<br/>实体名: 田迪, 实体类型:人物<br/>实体名: 子页, 实体类型:人物<br/>实体名: 1947年, 实体类型:Date</td> </tr>
</table>
2. **Relation triple extraction**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>
请抽取出下面的关系三元组:<br>陈奕迅(Eason Chan),1974年7月27日出生于香港,中国香港流行乐男歌手、演员,毕业于英国金斯顿大学。 1995年因获得第14届新秀歌唱大赛冠军而正式出道。2005年发行的歌曲《浮夸》成为其歌唱生涯的代表作品之一。</td> <td><陈奕迅, 出生日期, 1974年><br><陈奕迅, 出生地, 香港><br>
<陈奕迅, 毕业院校, 英国金斯顿大学><br>
<陈奕迅, 国籍, 中国></td> </tr>
</table>
3. **Title-to-abstract expansion**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>请把下列标题扩写成摘要, 不少于100字: 基于视觉语言多模态的实体关系联合抽取的研究。</td> <td>本文介绍了一种基于视觉语言的多模态实体关系联合抽取出方法。该方法利用了图像和文本之间的语义联系,通过将图像中的物体与相应的文本描述进行匹配来识别实体之间的关系。同时,本文还提出了一种新的模型结构——深度双向编码器-解码器网络(BiDAN),用于实现这种联合提取任务。实验结果表明,所提出的方法在多个数据集上取得了较好的性能表现,证明了其有效性和实用性。</td> </tr>
</table>
4. **Abstract-to-title compression**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>请把下列摘要缩写成标题:本文介绍了一种基于视觉语言的多模态实体关系联合抽取出方法。该方法利用了图像和文本之间的语义联系,通过将图像中的物体与相应的文本描述进行匹配来识别实体之间的关系。同时,本文还提出了一种新的模型结构——深度双向编码器-解码器网络(BiDAF),用于实现这种联合提取任务。实验结果表明,所提出的方法在多个数据集上取得了较好的性能表现,证明了其有效性和实用性。</td> <td>基于视觉语言的多模态实体关系联合抽取研究。</td> </tr>
</table>
5. **Keyword generation**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>请提取下面文本中的关键词。<br>本体是一种重要的知识库,其包含的丰富的语义信息可以为问答系统、信息检索、语义Web、信息抽取等领域的研究及相关应用提供重要的支持.因而,如何快速有效地构建本体具有非常重要的研究价值.研究者们分别从不同角度提出了大量有效地进行本体构建的方法.一般来讲,这些本体构建方法可以分为手工构建的方法和采用自动、半自动技术构建的方法.手工本体的方法往往需要本体专家参与到构建的整个过程,存在着构建成本高、效率低下、主观性强、移植不便等缺点,因而,此类方法正逐步被大量基于自动、半自动技术的本体构建方法所代替.自动、半自动构建的方法不需要(或仅需少量)人工参与,可以很方便地使用其它研究领域(如机器学习、自然语言处理等)的最新研究成果,也可以方便地使用不同数据源进行本体构建.</td> <td>本体,知识库,问答系统,信息检索,语义 Web,信息抽取,统计主导,语言分析主导,术语抽取,概念抽取,关系抽取,本体形成,评价方法,本体构建系统</td> </tr>
</table>
6. **Machine reading comprehension**
<table> <colgroup> <col style="width:60%"> <col style="width:40%"> </colgroup> <tr> <td>问题</td> <td>回复</td> </tr>
<tr> <td>请问这起交通事故是谁的责任居多?<br>小车和摩托车发生事故,在无红绿灯的十字路口,小停车看看左右,在觉得安全的情况下刹车慢慢以时速10公里左右的速度靠右行驶过路口,好没有出到十字路口正中时,被左边突然快速行驶过来的摩托车撞在车头前, 摩托车主摔到膝盖和檫伤脸部,请问这起交通事故是谁的责任居多。</td> <td>根据提供的信息,我们可以得出以下结论:<br>
1. 小车司机应该先观察交通信号灯是否亮起,如果没有信号灯,则应减速慢行通过交叉路口;<br>2. 在没有信号灯的交叉口中,小车的驾驶员应该在确保安全的前提下选择右侧车道行驶,以避免与来自左侧的车辆相撞;
3. 由于事故发生地点是十字路口,因此可以确定该事故属于道路交通事故范畴。<br>根据以上分析,我认为这起交通事故的主要责任在于摩托车驾驶人的行为。因为摩托车驾驶员在没有信号灯的情况下从左侧冲过来,导致与小车在十字路口相遇,造成了伤害。
然而,我们也需要考虑其他因素的影响,例如小车的速度、路面状况等。如果小车的车速过快或者路面湿滑,也可能对事故的发生产生影响。但是,由于提供的信息中没有提到这些细节,所以我们只能基于上述基本判断来回答这个问题。</td>
</tr>
</table>
## 准备工作 Preparation
由于LLaMA开源协议的限制,本模型仅限于研究和学习用途使用。请严格遵守LLaMA的使用要求和规范。为了确保这一点,我们需要确认您具有LLaMA的原始权重,并来自完全合法渠道。
---
According to the limitations of the LLaMA open source agreement, this model is limited to research and learning purposes. Please strictly abide by the usage requirements and specifications of LLaMA. To ensure this, we need to confirm that you have LLaMA's original weight and come from a completely legitimate source.
1. 你需要先下载模型到本地,并校验它们的检查和:
```
md5sum ./*
6b2b545ff7bacaeec6297198b4b745dd ./config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
4ba9cc7f11df0422798971bc962fe076 ./generation_config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
560b35ffd8a7a1f5b2d34a94a523659a ./pytorch_model.bin.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
85ae4132b11747b1609b8953c7086367 ./special_tokens_map.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
953dceae026a7aa88e062787c61ed9b0 ./tokenizer_config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
e765a7740a908b5e166e95b6ee09b94b ./tokenizer.model.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
```
2. 根据这里→的[指定脚本](https://github.com/neukg/TechGPT/blob/main/utils/decrypt.py)解码模型权重:
```shell
for file in $(ls /path/encrypt_weight); do
python decrypt.py --type decrypt \
--input_file /path/encrypt_weight/"$file" \
--output_dir /path/to_finetuned_model \
--key_file /path/to_original_llama_7B/consolidated.00.pth
done
```
请将 `/path/encrypt_weight`替换为你下载的加密文件目录,把`/path/to_original_llama_7B`替换为你已有的合法LLaMA-7B权重目录,里面应该有原LLaMA权重文件`consolidated.00.pth`,将 `/path/to_finetuned_model` 替换为你要存放解码后文件的目录。
在解码完成后,应该可以得到以下文件:
```shell
./config.json
./generation_config.json
./pytorch_model.bin
./special_tokens_map.json
./tokenizer_config.json
./tokenizer.model
```
3. 请检查所有文件的检查和是否和下面给出的相同, 以保证解码出正确的文件:
```
md5sum ./*
6d5f0d60a6e36ebc1518624a46f5a717 ./config.json
2917a1cafb895cf57e746cfd7696bfe5 ./generation_config.json
0d322cb6bde34f7086791ce12fbf2bdc ./pytorch_model.bin
15f7a943faa91a794f38dd81a212cb01 ./special_tokens_map.json
08f6f621dba90b2a23c6f9f7af974621 ./tokenizer_config.json
6ffe559392973a92ea28032add2a8494 ./tokenizer.model
```
---
1. Git clone this model first.
```
md5sum ./*
6b2b545ff7bacaeec6297198b4b745dd ./config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
4ba9cc7f11df0422798971bc962fe076 ./generation_config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
560b35ffd8a7a1f5b2d34a94a523659a ./pytorch_model.bin.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
85ae4132b11747b1609b8953c7086367 ./special_tokens_map.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
953dceae026a7aa88e062787c61ed9b0 ./tokenizer_config.json.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
e765a7740a908b5e166e95b6ee09b94b ./tokenizer.model.e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.enc
```
2. Decrypt the files using the scripts in https://github.com/neukg/TechGPT/blob/main/utils/decrypt.py
You can use the following command in Bash.
Please replace `/path/to_encrypted` with the path where you stored your encrypted file,
replace `/path/to_original_llama_7B` with the path where you stored your original LLaMA-7B file `consolidated.00.pth`,
and replace `/path/to_finetuned_model` with the path where you want to save your final trained model.
```bash
for file in $(ls /path/encrypt_weight); do
python decrypt.py --type decrypt \
--input_file /path/encrypt_weight/"$file" \
--output_dir /path/to_finetuned_model \
--key_file /path/to_original_llama_7B/consolidated.00.pth
done
```
After executing the aforementioned command, you will obtain the following files.
```
./config.json
./generation_config.json
./pytorch_model.bin
./special_tokens_map.json
./tokenizer_config.json
./tokenizer.model
```
3. Check md5sum
You can verify the integrity of these files by performing an MD5 checksum to ensure their complete recovery.
Here are the MD5 checksums for the relevant files:
```
md5sum ./*
6d5f0d60a6e36ebc1518624a46f5a717 ./config.json
2917a1cafb895cf57e746cfd7696bfe5 ./generation_config.json
0d322cb6bde34f7086791ce12fbf2bdc ./pytorch_model.bin
15f7a943faa91a794f38dd81a212cb01 ./special_tokens_map.json
08f6f621dba90b2a23c6f9f7af974621 ./tokenizer_config.json
6ffe559392973a92ea28032add2a8494 ./tokenizer.model
```
## 使用方法 Model Usage
请注意在**训练**和**推理**阶段, 模型接收的输入格式是一致的:
Please note that the input should be formatted as follows in both **training** and **inference**.
``` python
Human: {input} \n\nAssistant:
```
请在使用TechGPT之前保证你已经安装好`transformers`和`torch`:
```shell
pip install transformers
pip install torch
```
- 注意,必须保证安装的 `transformers` 的版本中已经有 `LlamaForCausalLM` 。<br>
- Note that you must ensure that the installed version of `transformers` already has `LlamaForCausalLM`.
[Example:](https://github.com/neukg/TechGPT/blob/main/inference.py)
``` python
from transformers import LlamaTokenizer, AutoModelForCausalLM, AutoConfig, GenerationConfig
import torch
ckpt_path = '/workspace/BELLE-train/Version_raw/'
load_type = torch.float16
device = torch.device(0)
tokenizer = LlamaTokenizer.from_pretrained(ckpt_path)
tokenizer.pad_token_id = 0
tokenizer.bos_token_id = 1
tokenizer.eos_token_id = 2
tokenizer.padding_side = "left"
model_config = AutoConfig.from_pretrained(ckpt_path)
model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=load_type, config=model_config)
model.to(device)
model.eval()
prompt = "Human: 请把下列标题扩写成摘要, 不少于100字: 基于视觉语言多模态的实体关系联合抽取的研究 \n\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(device)
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=1,
bos_token_id=1,
eos_token_id=2,
pad_token_id=0,
max_new_tokens=128,
min_new_tokens=10,
do_sample=True,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
repetition_penalty=1.2,
)
output = generation_output.sequences[0]
output = tokenizer.decode(output, skip_special_tokens=True)
print(output)
```
输出:
```
Human: 请把下列标题扩写成摘要, 不少于100字: 基于视觉语言多模态的实体关系联合抽取的研究
Assistant: 文本:基于视觉语言的多模态的实体关系联合抽取是自然语言处理领域中的一个重要问题。该文提出了一种新的方法,利用深度学习技术来提取图像中的语义信息,并使用这些信息来识别和抽取图像中的人、物、地点等实体之间的关系。实验结果表明,该方法在多个基准数据集上取得了很好的性能表现,证明了其有效性和实用性。
```
## 免责声明 Disclaimers
该项目仅供学习交流使用,禁止用于商业用途。在使用过程中,使用者需认真阅读并遵守以下声明:
1. 本项目仅为大模型测试功能而生,使用者需自行承担风险和责任,如因使用不当而导致的任何损失或伤害,本项目概不负责。
2. 本项目中出现的第三方链接或库仅为提供便利而存在,其内容和观点与本项目无关。使用者在使用时需自行辨别,本项目不承担任何连带责任;
3. 使用者在测试和使用模型时,应遵守相关法律法规,如因使用不当而造成损失的,本项目不承担责任,使用者应自行承担;若项目出现任何错误,请向我方反馈,以助于我们及时修复;
4. 本模型中出现的任何违反法律法规或公序良俗的回答,均不代表本项目观点和立场,我们将不断完善模型回答以使其更符合社会伦理和道德规范。
使用本项目即表示您已经仔细阅读、理解并同意遵守以上免责声明。本项目保留在不预先通知任何人的情况下修改本声明的权利。
---
This project is for learning exchange only, commercial use is prohibited. During use, users should carefully read and abide by the following statements:
1. This project is only for the test function of the large model, and the user shall bear the risks and responsibilities. This project shall not be responsible for any loss or injury caused by improper use.
2. The third-party links or libraries appearing in this project exist only for convenience, and their content and opinions have nothing to do with this project. Users should exercise their own judgment when using them, and this project does not bear any joint liability;
3. Users should abide by the relevant laws and regulations when testing and using the model. Losses caused by improper use are the user's own responsibility, and this project will not be held liable; if you find any error in the project, please give us feedback so that we can fix it in a timely manner;
4. Any answers in this model that violate laws and regulations or public order and good customs do not represent the views and positions of this project. We will continue to improve the model answers to make them more in line with social ethics and moral norms.
Using this project means that you have carefully read, understood and agreed to abide by the above disclaimer. The project reserves the right to modify this statement without prior notice to anyone.
## Citation
如果使用本项目的代码、数据或模型,请引用本项目。
Please cite our project when using our code, data or model.
```
@misc{TechGPT,
author = {Feiliang Ren, Ning An, Qi Ma, Hei Lei},
title = {TechGPT: Technology-Oriented Generative Pretrained Transformer},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/neukg/TechGPT}},
}
```
**我们对BELLE的工作表示衷心的感谢!**
**Our sincere thanks to BELLE for their work!**
```
@misc{ji2023better,
title={Towards Better Instruction Following Language Models for Chinese: Investigating the Impact of Training Data and Evaluation},
author={Yunjie Ji and Yan Gong and Yong Deng and Yiping Peng and Qiang Niu and Baochang Ma and Xiangang Li},
year={2023},
eprint={2304.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{BELLE,
author = {Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, Xiangang Li},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
|
limcheekin/mpt-7b-storywriter-ct2
|
limcheekin
| 2023-06-27T07:04:45Z | 4 | 0 |
transformers
|
[
"transformers",
"ctranslate2",
"mpt-7b-storywriter",
"quantization",
"int8",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-01T09:09:40Z |
---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- mpt-7b-storywriter
- quantization
- int8
---
# Model Card for MPT-7B-StoryWriter-65k+ Q8
The model is a quantized version of [mosaicml/mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) with int8 quantization.
## Model Details
### Model Description
The model was quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```
ct2-transformers-converter --model mosaicml/mpt-7b-storywriter --output_dir mosaicml/mpt-7b-storywriter-ct2 --copy_files generation_config.json tokenizer.json tokenizer_config.json special_tokens_map.json --quantization int8 --force --low_cpu_mem_usage --trust_remote_code
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```
pip install -qU ctranslate2 transformers[torch] accelerate einops
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
generator = ctranslate2.Generator("limcheekin/mpt-7b-storywriter-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
prompt = "Long long time ago, "
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([tokens], max_length=256, sampling_topk=10)
text = tokenizer.decode(results[0].sequences_ids[0])
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#mpt.
The key method in the code above is `generate_batch`; you can find [its supported parameters here](https://opennmt.net/CTranslate2/python/ctranslate2.Generator.html#ctranslate2.Generator.generate_batch).
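For example, a sketch of passing a few of those parameters (values are illustrative only, and the `generator` and `tokenizer` objects from the snippet above are assumed):
```python
results = generator.generate_batch(
    [tokens],
    max_length=256,            # cap on the number of generated tokens
    sampling_topk=10,          # sample among the 10 most likely tokens
    sampling_temperature=0.8,  # soften/sharpen the sampling distribution
    repetition_penalty=1.1,    # discourage repeated tokens
    include_prompt_in_result=False,
)
text = tokenizer.decode(results[0].sequences_ids[0])
```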
|
TurkuNLP/gpt3-finnish-xl
|
TurkuNLP
| 2023-06-27T06:51:26Z | 164 | 7 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-15T10:49:56Z |
---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
Generative Pretrained Transformer with 1.5B parameters for Finnish.
TurkuNLP Finnish GPT-3-models are a model family of pretrained monolingual GPT-style language models that are based on BLOOM-architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended to be used as foundational models that can be e.g. instruction finetuned to serve as modern chat-models.
All models are trained for 300B tokens.
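A minimal generation sketch with the `transformers` library (the Finnish prompt and sampling settings are only illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/gpt3-finnish-xl")
model = AutoModelForCausalLM.from_pretrained("TurkuNLP/gpt3-finnish-xl")

# Plain language-model continuation; remember this model is not instruction finetuned.
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```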
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| ”3B” | 32 | 2560 | 32 | 2.8B |
| ”8B” | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9\% | 1.5 | 22.7\%|
|mC4-Fi | 46.3B | 22.4\% | 1.0 | 20.0\%|
|CC-Fi | 79.6B | 38.5\% | 1.0 | 34.4\%|
|Fiwiki | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Lönnrot | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Yle | 1.6B | 0.8\% | 2.0 | 1.4\%|
|STT | 2.2B | 1.1\% | 2.0 | 1.9\%|
|ePub | 13.5B | 6.5\% | 1.0 | 5.8\%|
|Lehdet | 5.8B | 2.8\% | 1.0 | 2.5\%|
|Suomi24 | 20.6B | 9.9\% | 1.0 | 8.9\%|
|Reddit-Fi | 0.7B | 0.4\% | 1.0 | 0.3\%|
|**TOTAL** | **207.0B** | **100.0\%** | **N/A** | **100.0\%** |
More documentation and a paper coming soon.
|
TurkuNLP/gpt3-finnish-8B
|
TurkuNLP
| 2023-06-27T06:50:47Z | 44 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-16T08:04:25Z |
---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
Generative Pretrained Transformer with 8B parameters for Finnish.
TurkuNLP Finnish GPT-3-models are a model family of pretrained monolingual GPT-style language models that are based on BLOOM-architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended to be used as foundational models that can be e.g. instruction finetuned to serve as modern chat-models.
All models are trained for 300B tokens.
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| ”3B” | 32 | 2560 | 32 | 2.8B |
| ”8B” | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9\% | 1.5 | 22.7\%|
|mC4-Fi | 46.3B | 22.4\% | 1.0 | 20.0\%|
|CC-Fi | 79.6B | 38.5\% | 1.0 | 34.4\%|
|Fiwiki | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Lönnrot | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Yle | 1.6B | 0.8\% | 2.0 | 1.4\%|
|STT | 2.2B | 1.1\% | 2.0 | 1.9\%|
|ePub | 13.5B | 6.5\% | 1.0 | 5.8\%|
|Lehdet | 5.8B | 2.8\% | 1.0 | 2.5\%|
|Suomi24 | 20.6B | 9.9\% | 1.0 | 8.9\%|
|Reddit-Fi | 0.7B | 0.4\% | 1.0 | 0.3\%|
|**TOTAL** | **207.0B** | **100.0\%** | **N/A** | **100.0\%** |
More documentation and a paper coming soon.
|
TurkuNLP/gpt3-finnish-small
|
TurkuNLP
| 2023-06-27T06:48:35Z | 3,087 | 12 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"feature-extraction",
"text-generation",
"fi",
"arxiv:2203.02155",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-15T10:08:16Z |
---
language:
- fi
pipeline_tag: text-generation
license: apache-2.0
---
Generative Pretrained Transformer with 186M parameters for Finnish.
TurkuNLP Finnish GPT-3-models are a model family of pretrained monolingual GPT-style language models that are based on BLOOM-architecture.
Note that the models are pure language models, meaning that they are not [instruction finetuned](https://arxiv.org/abs/2203.02155) for dialogue
or answering questions.
These models are intended to be used as foundational models that can be e.g. instruction finetuned to serve as modern chat-models.
All models are trained for 300B tokens.
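A quick way to try the model is the `text-generation` pipeline (sketch only; the prompt and sampling settings are arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="TurkuNLP/gpt3-finnish-small")
out = generator("Suomi on maa, jossa", max_new_tokens=30, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```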
**Parameters**
| Model | Layers | Dim | Heads | Params |
|--------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| ”3B” | 32 | 2560 | 32 | 2.8B |
| ”8B” | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |
**Datasets**
We used a combination of multiple Finnish resources.
* Finnish Internet Parsebank https://turkunlp.org/finnish_nlp.html
* mC4 multilingual colossal, cleaned Common Crawl https://huggingface.co/datasets/mc4
* Common Crawl Finnish https://TODO
* Finnish Wikipedia https://fi.wikipedia.org/wiki
* Lönnrot Projekti Lönnrot http://www.lonnrot.net/
* ePub National library ”epub” collection
* National library ”lehdet” collection
* Suomi24 The Suomi 24 Corpus 2001-2020 http://urn.fi/urn:nbn:fi:lb-2021101527
* Reddit r/Suomi submissions and comments https://www.reddit.com/r/Suomi
* STT Finnish News Agency Archive 1992-2018 http://urn.fi/urn:nbn:fi:lb-2019041501
* Yle Finnish News Archive 2011-2018 http://urn.fi/urn:nbn:fi:lb-2017070501
* Yle Finnish News Archive 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050401
* Yle News Archive Easy-to-read Finnish 2011-2018 http://urn.fi/urn:nbn:fi:lb-2019050901
* Yle News Archive Easy-to-read Finnish 2019-2020 http://urn.fi/urn:nbn:fi:lb-2021050701
* ROOTS TODO
**Sampling ratios**
|Dataset | Chars | Ratio | Weight | W.Ratio |
|----------|--------|---------|--------|---------|
|Parsebank | 35.0B | 16.9\% | 1.5 | 22.7\%|
|mC4-Fi | 46.3B | 22.4\% | 1.0 | 20.0\%|
|CC-Fi | 79.6B | 38.5\% | 1.0 | 34.4\%|
|Fiwiki | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Lönnrot | 0.8B | 0.4\% | 3.0 | 1.0\%|
|Yle | 1.6B | 0.8\% | 2.0 | 1.4\%|
|STT | 2.2B | 1.1\% | 2.0 | 1.9\%|
|ePub | 13.5B | 6.5\% | 1.0 | 5.8\%|
|Lehdet | 5.8B | 2.8\% | 1.0 | 2.5\%|
|Suomi24 | 20.6B | 9.9\% | 1.0 | 8.9\%|
|Reddit-Fi | 0.7B | 0.4\% | 1.0 | 0.3\%|
|**TOTAL** | **207.0B** | **100.0\%** | **N/A** | **100.0\%** |
More documentation and a paper coming soon.
|
joohwan/xlmr
|
joohwan
| 2023-06-27T06:30:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T06:08:00Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6138
- Accuracy: 0.9163
- F1: 0.9153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3962 | 1.0 | 450 | 0.3872 | 0.9011 | 0.9005 |
| 0.0584 | 2.0 | 900 | 0.4941 | 0.9180 | 0.9171 |
| 0.0284 | 3.0 | 1350 | 0.6192 | 0.9138 | 0.9127 |
| 0.0144 | 4.0 | 1800 | 0.5967 | 0.9224 | 0.9214 |
| 0.0103 | 5.0 | 2250 | 0.6138 | 0.9163 | 0.9153 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kejolong/bayonetta
|
kejolong
| 2023-06-27T06:25:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T06:19:24Z |
---
license: creativeml-openrail-m
---
|
mandliya/default-taxi-v3
|
mandliya
| 2023-06-27T06:08:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T06:08:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: default-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mandliya/default-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
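Once loaded, a greedy rollout might look like the sketch below. It assumes the pickled dict follows the course convention and stores the table under a `"qtable"` key, and that the environment uses the Gymnasium reset/step API — check both against your setup.
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0

while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("Episode reward:", total_reward)
```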
|
Shridipta-06/rl_course_vizdoom_health_gathering_supreme
|
Shridipta-06
| 2023-06-27T06:01:26Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T02:20:28Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.81 +/- 3.54
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Shridipta-06/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
97jmlr/lander2
|
97jmlr
| 2023-06-27T05:51:34Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T23:33:06Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -220.16 +/- 118.05
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': '97jmlr/lander2'
'batch_size': 512
'minibatch_size': 128}
```
|
xyntopia/tb_classifier
|
xyntopia
| 2023-06-27T05:43:03Z | 164 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-12T05:51:23Z |
---
license: mit
---
This model classifies smaller text blocks. Right now it is mainly used to identify addresses.
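A minimal sketch with the `text-classification` pipeline (the example string is made up, and the label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="xyntopia/tb_classifier")
print(classifier("Musterstraße 12, 12345 Berlin, Germany"))  # e.g. an address-like text block
```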
|
S3S3/Reinforce-CartPole-v1
|
S3S3
| 2023-06-27T05:34:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T05:34:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xunnylee/enahappy
|
xunnylee
| 2023-06-27T05:24:52Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-27T05:23:03Z |
---
license: openrail
---
hi! thank you for using my model!
please credit me @xunnylee on discord/youtube if you use it!
enjoy! :D
|
sid/Reinforce-Pixelcopter-PLE-v0
|
sid
| 2023-06-27T04:47:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T21:27:57Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.70 +/- 24.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
Training call:
```python
scores = reinforce(pixelcopter_policy,
                   pixelcopter_optimizer,
                   pixelcopter_hyperparameters["n_training_episodes"],
                   pixelcopter_hyperparameters["max_t"],
                   pixelcopter_hyperparameters["gamma"],
                   1000)
```
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AlgorithmicResearchGroup/flan-t5-xxl-arxiv-cs-ml-closed-qa
|
AlgorithmicResearchGroup
| 2023-06-27T04:40:26Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv",
"summarization",
"en",
"dataset:ArtifactAI/arxiv-cs-ml-instruct-tune-50k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-26T14:17:24Z |
---
license: apache-2.0
language:
- en
pipeline_tag: summarization
widget:
- text: What is an LSTM?
example_title: Question Answering
tags:
- arxiv
datasets:
- ArtifactAI/arxiv-cs-ml-instruct-tune-50k
---
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
This is a FLAN-T5-XXL model trained on [ArtifactAI/arxiv-cs-ml-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k). This model is for research purposes only and ***should not be used in production settings***.
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "ArtifactAI/flant5-xxl-math-full-training-run-one"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
input_ids = tokenizer("What is the peak phase of T-eV?", return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=1000, do_sample=True, top_p=0.9)
print(f"summary: {tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]}")
```
## Training Data
The model was trained on [ArtifactAI/arxiv-math-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-cs-ml-instruct-50k), a dataset of question/answer pairs. Questions are generated using the t5-base model, while the answers are generated using the GPT-3.5-turbo model.
# Citation
```
@misc{flan-t5-xxl-arxiv-cs-ml-zeroshot-qa,
title={flan-t5-xxl-arxiv-cs-ml-zeroshot-qa},
author={Matthew Kenney},
year={2023}
}
```
|
Louth/ppo-LunarLander-v2
|
Louth
| 2023-06-27T03:42:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T03:42:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.08 +/- 10.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
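Until then, a minimal loading sketch (the checkpoint filename is an assumption — check this repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed, not confirmed by the repo.
checkpoint = load_from_hub(repo_id="Louth/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```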
|
adooo/bigmodels
|
adooo
| 2023-06-27T03:20:33Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-05-04T08:37:52Z |
---
license: openrail
---
<img src="https://huggingface.co/adooo/bigmodels/resolve/main/NSX-1-EzBackground-pruned.png">NSX-1-EzBackground-pruned<br>
|
hopkins/bert-wiki-choked-5
|
hopkins
| 2023-06-27T03:04:35Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-27T03:03:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-wiki-choked-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wiki-choked-5
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 1 | nan |
| 5.0259 | 2.0 | 2 | nan |
| 0.0 | 3.0 | 3 | nan |
| 0.0 | 4.0 | 4 | nan |
| 20.9017 | 5.0 | 5 | nan |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/bert-wiki-choked-4
|
hopkins
| 2023-06-27T03:02:13Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T03:01:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-wiki-choked-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wiki-choked-4
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0 | 1.0 | 1 | nan |
| 5.0605 | 2.0 | 2 | nan |
| 0.0 | 3.0 | 3 | nan |
| 0.0 | 4.0 | 4 | nan |
| 20.3063 | 5.0 | 5 | nan |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Fizzzk/1
|
Fizzzk
| 2023-06-27T03:00:08Z | 0 | 0 | null |
[
"license:cdla-sharing-1.0",
"region:us"
] | null | 2023-06-27T03:00:08Z |
---
license: cdla-sharing-1.0
---
|
hopkins/bert-wiki-choked
|
hopkins
| 2023-06-27T02:49:35Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T02:39:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-wiki-choked
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wiki-choked
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sdpkjc/CartPole-v1-dqn-seed1
|
sdpkjc
| 2023-06-27T02:45:57Z | 0 | 0 |
cleanrl
|
[
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T06:50:47Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 47.80 +/- 11.83
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[dqn]"
python -m cleanrl_utils.enjoy --exp-name dqn --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/CartPole-v1-dqn-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/sdpkjc/CartPole-v1-dqn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/CartPole-v1-dqn-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --total-timesteps 1000 --learning-starts 250 --save-model --hf-entity sdpkjc --upload-model
```
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': False,
'cuda': True,
'end_e': 0.05,
'env_id': 'CartPole-v1',
'exp_name': 'dqn',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_rate': 0.00025,
'learning_starts': 250,
'num_envs': 1,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 1000,
'track': False,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
trojblue/blip2-opt-6.7b-coco-fp16
|
trojblue
| 2023-06-27T02:29:00Z | 54 | 1 |
transformers
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-27T01:49:05Z |
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
---
# BLIP-2, OPT-6.7b, Fine-tuned on COCO - Unofficial FP16 Version
This repository contains an unofficial version of the BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b), which has been fine-tuned on COCO and converted to FP16 for reduced model size and memory footprint.
The original model, BLIP-2, was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
For a comprehensive understanding of the model, its description, intended uses, limitations, and instructions on usage with different hardware and precision settings, please refer to the [official model card](https://huggingface.co/Salesforce/blip2-opt-6.7b-coco).
## Unofficial FP16 Version
This version of the BLIP-2 model has been converted to use FP16 precision, which effectively reduces the model size and memory requirements. The conversion to FP16 can potentially accelerate the model's computation time on hardware with FP16 support, although it might slightly affect the model's performance due to reduced numerical precision.
This unofficial FP16 version is ideal for situations where storage, memory, or computational resources are limited.
Please note, this is an **unofficial** repository and not maintained or endorsed by the original authors of the model. The FP16 conversion was conducted independently and any potential issues, limitations or discrepancies with the original model are not the responsibility of the original authors.
### How to use
The usage of this FP16 version of the model is similar to the original model. For specific code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
Please ensure to test the performance and accuracy of this FP16 model thoroughly in your specific use-case to confirm it meets your needs.
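As a rough sketch (untested against this exact checkpoint), fp16 loading and image captioning with `transformers` might look like:
```python
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("trojblue/blip2-opt-6.7b-coco-fp16")
model = Blip2ForConditionalGeneration.from_pretrained(
    "trojblue/blip2-opt-6.7b-coco-fp16", torch_dtype=torch.float16, device_map="auto"
)  # device_map="auto" requires `accelerate`

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # public example image
image = Image.open(requests.get(url, stream=True).raw)

# Image captioning: pass the image without a text prompt
inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```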
This version can be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as a prompt to the model
*Disclaimer: This is an unofficial version of the model and any potential issues or discrepancies from the official model are not the responsibility of the original authors.*
|
hopkins/bert-wiki
|
hopkins
| 2023-06-27T02:17:16Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T17:01:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wiki
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dangdana/bert-base-banking77-pt2
|
dangdana
| 2023-06-27T02:15:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-23T03:06:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9288880368785732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3126
- F1: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1538 | 1.0 | 626 | 0.8261 | 0.8491 |
| 0.4114 | 2.0 | 1252 | 0.3838 | 0.9187 |
| 0.1957 | 3.0 | 1878 | 0.3126 | 0.9289 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.3
|
tmpupload/superhot-30b-8k-no-rlhf-test-128g-GPTQ
|
tmpupload
| 2023-06-27T02:14:50Z | 10 | 3 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T22:44:28Z |
---
license: other
---
# superhot-30b-8k-4bit-128g-safetensors
**Note: Maximum sequence length (max_seq_len) and compression factor (compress_pos_emb) need to be set to 8192 (or lower) and 4.**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 30B:
https://huggingface.co/huggyllama/llama-30b
SuperHOT 30B 8k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-30b LORA=kaiokendev_superhot-30b-8k-no-rlhf-test python export_hf_checkpoint.py
```
Quantized with AutoGPTQ:
https://github.com/PanQiWei/AutoGPTQ
``` sh
python quant_with_alpaca.py --pretrained_model_dir superhot-30b-8k-safetensors --quantized_model_dir superhot-30b-8k-4bit-128g-safetensors --bits 4 --group_size 128 --desc_act --num_samples 256 --save_and_reload
```
Perplexity:
```
CUDA_VISIBLE_DEVICES=0 python test_benchmark_inference.py \
-d /workspace/models/superhot-30b-8k-4bit-128g-safetensors \
-ppl \
-ppl_ds datasets/wikitext2.txt \
-l 8192 \
-cpe 4 \
-ppl_cn 40 \
-ppl_cs 8192 \
-ppl_ct 8192
-- Perplexity:
-- - Dataset: datasets/wikitext2.txt
-- - Chunks: 40
-- - Chunk size: 8192 -> 8192
-- - Chunk overlap: 0
-- - Min. chunk size: 50
-- - Key: text
-- Tokenizer: /workspace/models/superhot-30b-8k-4bit-128g-safetensors/tokenizer.model
-- Model config: /workspace/models/superhot-30b-8k-4bit-128g-safetensors/config.json
-- Model: /workspace/models/superhot-30b-8k-4bit-128g-safetensors/4bit-128g.safetensors
-- Sequence length: 8192
-- RoPE compression factor: 4.0
-- Tuning:
-- --matmul_recons_thd: 8
-- --fused_mlp_thd: 2
-- --sdp_thd: 8
-- Options: ['perplexity']
** Time, Load model: 4.31 seconds
** Time, Load tokenizer: 0.01 seconds
-- Groupsize (inferred): 128
-- Act-order (inferred): yes
** VRAM, Model: [cuda:0] 17,043.70 MB
-- Loading dataset...
-- Testing 40 chunks....
** Perplexity: 4.6612
```
|
tmpupload/superhot-13b-8k-no-rlhf-test-GPTQ
|
tmpupload
| 2023-06-27T02:14:34Z | 6 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T12:57:16Z |
---
license: other
---
# superhot-13b-8k-4bit--1g-safetensors
**Note: Maximum sequence length (max_seq_len) and compression factor (compress_pos_emb) need to be set to 8192 (or lower) and 4.**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 13B:
https://huggingface.co/huggyllama/llama-13b
SuperHOT 13B 8k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-13b LORA=kaiokendev_superhot-13b-8k-no-rlhf-test python export_hf_checkpoint.py
```
Quantized with AutoGPTQ:
https://github.com/PanQiWei/AutoGPTQ
``` sh
python quant_with_alpaca.py --pretrained_model_dir superhot-13b-8k-safetensors --quantized_model_dir superhot-13b-8k-no-rlhf-test-GPTQ --bits 4 --group_size -1 --desc_act --num_samples 256 --save_and_reload
```
Perplexity:
```
CUDA_VISIBLE_DEVICES=0 python test_benchmark_inference.py \
-d /workspace/models/superhot-13b-8k-no-rlhf-test-GPTQ \
-ppl \
-ppl_ds datasets/wikitext2.txt \
-l 8192 \
-cpe 4 \
-ppl_cn 40 \
-ppl_cs 8192 \
-ppl_ct 8192
-- Perplexity:
-- - Dataset: datasets/wikitext2.txt
-- - Chunks: 40
-- - Chunk size: 8192 -> 8192
-- - Chunk overlap: 0
-- - Min. chunk size: 50
-- - Key: text
-- Tokenizer: /workspace/models/superhot-13b-8k-no-rlhf-test-GPTQ/tokenizer.model
-- Model config: /workspace/models/superhot-13b-8k-no-rlhf-test-GPTQ/config.json
-- Model: /workspace/models/superhot-13b-8k-no-rlhf-test-GPTQ/4bit.safetensors
-- Sequence length: 8192
-- RoPE compression factor: 4.0
-- Tuning:
-- --matmul_recons_thd: 8
-- --fused_mlp_thd: 2
-- --sdp_thd: 8
-- Options: ['perplexity']
** Time, Load model: 3.58 seconds
** Time, Load tokenizer: 0.01 seconds
-- Groupsize (inferred): None
-- Act-order (inferred): no
!! Model has empty group index (discarded)
** VRAM, Model: [cuda:0] 6,754.74 MB
-- Loading dataset...
-- Testing 40 chunks....
** Perplexity: 5.7766
```
|
tmpupload/superhot-13b-8k-no-rlhf-test-32g-GPTQ
|
tmpupload
| 2023-06-27T02:14:14Z | 6 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T14:52:21Z |
---
license: other
---
# superhot-13b-8k-4bit-32g-safetensors
**Note: Maximum sequence length (max_seq_len) and compression factor (compress_pos_emb) need to be set to 8192 (or lower) and 4.**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 13B:
https://huggingface.co/huggyllama/llama-13b
SuperHOT 13B 8k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-13b LORA=kaiokendev_superhot-13b-8k-no-rlhf-test python export_hf_checkpoint.py
```
Quantized with AutoGPTQ:
https://github.com/PanQiWei/AutoGPTQ
``` sh
python quant_with_alpaca.py --pretrained_model_dir superhot-13b-8k-safetensors --quantized_model_dir superhot-13b-8k-no-rlhf-test-32g-GPTQ --bits 4 --group_size 32 --desc_act --num_samples 256 --save_and_reload
```
Perplexity:
```
CUDA_VISIBLE_DEVICES=0 python test_benchmark_inference.py \
-d /workspace/models/superhot-13b-8k-no-rlhf-test-32g-GPTQ \
-ppl \
-ppl_ds datasets/wikitext2.txt \
-l 8192 \
-cpe 4 \
-ppl_cn 40 \
-ppl_cs 8192 \
-ppl_ct 8192
-- Perplexity:
-- - Dataset: datasets/wikitext2.txt
-- - Chunks: 40
-- - Chunk size: 8192 -> 8192
-- - Chunk overlap: 0
-- - Min. chunk size: 50
-- - Key: text
-- Tokenizer: /workspace/models/superhot-13b-8k-no-rlhf-test-32g-GPTQ/tokenizer.model
-- Model config: /workspace/models/superhot-13b-8k-no-rlhf-test-32g-GPTQ/config.json
-- Model: /workspace/models/superhot-13b-8k-no-rlhf-test-32g-GPTQ/4bit-32g.safetensors
-- Sequence length: 8192
-- RoPE compression factor: 4.0
-- Tuning:
-- --matmul_recons_thd: 8
-- --fused_mlp_thd: 2
-- --sdp_thd: 8
-- Options: ['perplexity']
** Time, Load model: 4.23 seconds
** Time, Load tokenizer: 0.01 seconds
-- Groupsize (inferred): 32
-- Act-order (inferred): yes
** VRAM, Model: [cuda:0] 7,732.62 MB
-- Loading dataset...
-- Testing 40 chunks....
** Perplexity: 5.4066
```
|
tmpupload/superhot-13b-16k-no-rlhf-test-GPTQ
|
tmpupload
| 2023-06-27T02:13:55Z | 7 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T00:42:30Z |
---
license: other
---
# superhot-13b-16k-4bit--1g-safetensors
**Note: Maximum sequence length (max_seq_len) and compression factor (compress_pos_emb) need to be set to 16384 (or lower) and 8.**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 13B:
https://huggingface.co/huggyllama/llama-13b
SuperHOT 13B 16k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-13b-16k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-13b LORA=kaiokendev_superhot-13b-16k-no-rlhf-test python export_hf_checkpoint.py
```
Quantized with AutoGPTQ:
https://github.com/PanQiWei/AutoGPTQ
``` sh
python quant_with_alpaca.py --pretrained_model_dir superhot-13b-16k-safetensors --quantized_model_dir superhot-13b-16k-4bit--1g-safetensors --bits 4 --group_size -1 --desc_act --num_samples 256 --save_and_reload
```
Perplexity:
```
CUDA_VISIBLE_DEVICES=0 python test_benchmark_inference.py \
-d /workspace/models/superhot-13b-16k-4bit--1g-safetensors \
-ppl \
-ppl_ds datasets/wikitext2.txt \
-l 16384 \
-cpe 8 \
-ppl_cn 40 \
-ppl_cs 16384 \
-ppl_ct 16384
-- Perplexity:
-- - Dataset: datasets/wikitext2.txt
-- - Chunks: 40
-- - Chunk size: 16384 -> 16384
-- - Chunk overlap: 0
-- - Min. chunk size: 50
-- - Key: text
-- Tokenizer: /workspace/models/superhot-13b-16k-4bit--1g-safetensors/tokenizer.model
-- Model config: /workspace/models/superhot-13b-16k-4bit--1g-safetensors/config.json
-- Model: /workspace/models/superhot-13b-16k-4bit--1g-safetensors/4bit.safetensors
-- Sequence length: 16384
-- RoPE compression factor: 8.0
-- Tuning:
-- --matmul_recons_thd: 8
-- --fused_mlp_thd: 2
-- --sdp_thd: 8
-- Options: ['perplexity']
** Time, Load model: 3.69 seconds
** Time, Load tokenizer: 0.01 seconds
-- Groupsize (inferred): None
-- Act-order (inferred): no
!! Model has empty group index (discarded)
** VRAM, Model: [cuda:0] 6,974.74 MB
-- Loading dataset...
-- Testing 21 chunks...
** Perplexity: 7.5462
```
|
akaneshiro/Reinforce-CartPole
|
akaneshiro
| 2023-06-27T02:08:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T02:08:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Arielkanevsky/Complaints_Classifier
|
Arielkanevsky
| 2023-06-27T02:05:29Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-02T00:49:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Complaints_Classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Complaints_Classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 412 | 0.0239 | 0.9946 |
| 0.0678 | 2.0 | 824 | 0.0009 | 1.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
roa7n/llama_human_ocr_ensembl
|
roa7n
| 2023-06-27T01:55:10Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-24T17:55:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
tmpupload/superhot-30b-8k-no-rlhf-test-GGML
|
tmpupload
| 2023-06-27T01:53:58Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-06-26T10:05:23Z |
---
license: other
---
# superhot-30b-8k-no-rlhf-test-GGML
**Note: LLAMA_ROPE_SCALE from PR [#1967](https://github.com/ggerganov/llama.cpp/pull/1967) needs to be set to 0.25**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 30B:
https://huggingface.co/huggyllama/llama-30b
SuperHOT 30B 8k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-30b LORA=kaiokendev_superhot-30b-8k-no-rlhf-test python export_hf_checkpoint.py
```
Converted and quantized with llama.cpp commit `447ccbe`:
``` sh
python convert.py superhot-30b-8k-safetensors --outtype f32 --outfile superhot-30b-8k-no-rlhf-test.ggmlv3.f32.bin
./bin/quantize superhot-30b-8k-no-rlhf-test.ggmlv3.f32.bin superhot-30b-8k-no-rlhf-test.ggmlv3.Q2_K.bin Q2_K
```
|
tmpupload/superhot-13b-8k-no-rlhf-test-GGML
|
tmpupload
| 2023-06-27T01:52:44Z | 0 | 5 | null |
[
"license:other",
"region:us"
] | null | 2023-06-26T04:56:30Z |
---
license: other
---
# superhot-13b-8k-no-rlhf-test-GGML
**Note: LLAMA_ROPE_SCALE from PR [#1967](https://github.com/ggerganov/llama.cpp/pull/1967) needs to be set to 0.25**
Merged base LLaMA and LoRA with this:
https://github.com/tloen/alpaca-lora
Base LLaMA 13B:
https://huggingface.co/huggyllama/llama-13b
SuperHOT 13B 8k no-rlhf-test LoRA:
https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test
``` sh
BASE_MODEL=huggyllama_llama-13b LORA=kaiokendev_superhot-13b-8k-no-rlhf-test python export_hf_checkpoint.py
```
Converted and quantized with llama.cpp commit `447ccbe`:
``` sh
python convert.py superhot-13b-8k-safetensors --outtype f32 --outfile superhot-13b-8k-no-rlhf-test.ggmlv3.f32.bin
./bin/quantize superhot-13b-8k-no-rlhf-test.ggmlv3.f32.bin superhot-13b-8k-no-rlhf-test.ggmlv3.Q2_K.bin Q2_K
```
|
Interfan/abraham
|
Interfan
| 2023-06-27T01:33:46Z | 31 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-27T01:29:57Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Abraham Dreambooth model trained by Interfan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
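Alternatively, a minimal `diffusers` sketch for trying the concept outside A1111 (the prompt, and in particular treating "abraham" as the instance token, is an assumption):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Interfan/abraham", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "abraham" as the trigger word is assumed from the concept name
image = pipe("a portrait photo of abraham").images[0]
image.save("abraham.png")
```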
Sample pictures of this concept:
|
Osolon/wav2vec2-large-xls-r-300m-pl
|
Osolon
| 2023-06-27T01:20:14Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-30T06:46:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-pl-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pl-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1589
- Wer: 0.1338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 765
- num_epochs: 10
- mixed_precision_training: Native AMP
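A hedged sketch of the same values as `transformers.TrainingArguments` for CTC fine-tuning; the output directory is an assumption, and the Common Voice preprocessing, processor, and data collator are omitted:
```python
from transformers import TrainingArguments, Wav2Vec2ForCTC

# In practice vocab_size / pad_token_id must match the CTC tokenizer built for Polish
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-300m")

args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-pl-colab",  # assumed output path
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size 32
    warmup_steps=765,
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
    lr_scheduler_type="linear",
)
```
A `Trainer` would then combine these arguments with the processed common_voice_10_0 splits.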
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9573 | 1.31 | 1000 | 3.2143 | 1.0 |
| 1.0216 | 2.61 | 2000 | 0.2310 | 0.2064 |
| 0.2925 | 3.92 | 3000 | 0.1888 | 0.1689 |
| 0.2252 | 5.23 | 4000 | 0.1731 | 0.1507 |
| 0.1983 | 6.54 | 5000 | 0.1722 | 0.1446 |
| 0.1757 | 7.84 | 6000 | 0.1637 | 0.1367 |
| 0.1656 | 9.15 | 7000 | 0.1589 | 0.1338 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sid/dqn-SpaceInvadersNoFrameskip-v4-test
|
sid
| 2023-06-27T01:03:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T01:02:54Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 952.50 +/- 279.56
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sid -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sid
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
GranataKaoruChigusa/hokkaidomusumetondenheis
|
GranataKaoruChigusa
| 2023-06-27T00:58:42Z | 0 | 0 |
fairseq
|
[
"fairseq",
"legal",
"biology",
"chemistry",
"art",
"medical",
"text-classification",
"aa",
"dataset:gsdf/EasyNegative",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:gsdf/Counterfeit-V3.0",
"doi:10.57967/hf/0817",
"license:artistic-2.0",
"region:us"
] |
text-classification
| 2023-03-02T00:01:16Z |
---
license: artistic-2.0
datasets:
- gsdf/EasyNegative
- fka/awesome-chatgpt-prompts
- gsdf/Counterfeit-V3.0
language:
- aa
metrics:
- accuracy
- bertscore
- character
library_name: fairseq
pipeline_tag: text-classification
tags:
- legal
- biology
- chemistry
- art
- medical
---
|
ayertey01/wav2vec2-large-xlsr-53-AsanteTwi-04
|
ayertey01
| 2023-06-27T00:54:23Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T21:30:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53-AsanteTwi-04
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: tw
split: test
args: tw
metrics:
- name: Wer
type: wer
value: 0.625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-AsanteTwi-04
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7250
- Wer: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 13.2642 | 8.33 | 50 | 4.7327 | 1.0 |
| 3.1075 | 16.67 | 100 | 3.1680 | 1.0 |
| 2.8849 | 25.0 | 150 | 2.9745 | 1.0 |
| 2.8553 | 33.33 | 200 | 2.9167 | 1.0 |
| 2.8333 | 41.67 | 250 | 2.8538 | 1.0 |
| 2.6501 | 50.0 | 300 | 2.3417 | 1.0 |
| 1.8966 | 58.33 | 350 | 1.1529 | 0.875 |
| 0.9431 | 66.67 | 400 | 0.8519 | 0.75 |
| 0.5951 | 75.0 | 450 | 0.7970 | 0.625 |
| 0.444 | 83.33 | 500 | 0.7250 | 0.625 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
QMB15/Wizard-Vicuna-30B-SuperHOT-8k-test-GPTQ
|
QMB15
| 2023-06-27T00:46:37Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T23:53:45Z |
This is https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored, merged with https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test, then quantized to 4bit with AutoGPTQ.
There are two quantized versions. One is a plain 4-bit version with only act-order and no groupsize. The other is an experimental version using groupsize 128, act-order, and kaiokendev's ScaledLLamaAttention monkey patch applied *during* quantization, the idea being to help the calibration account for the new attention scale. It seems to have worked: perplexity improves by around 0.04 vs the unpatched quant. That may not be worth the trouble, but it's better, so I'll put it up anyway.
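For reference, a minimal AutoGPTQ sketch of the groupsize-128, act-order variant; the merged model path, calibration text, and output directory are assumptions, and kaiokendev's ScaledLLamaAttention patch is not shown here:
```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

merged_model = "wizard-vicuna-30b-superhot-8k-merged"  # assumed local path of the merged model
quant_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=True)  # group_size=-1 for the plain variant

tokenizer = AutoTokenizer.from_pretrained(merged_model, use_fast=True)
model = AutoGPTQForCausalLM.from_pretrained(merged_model, quant_config)

# A real run needs a proper calibration set; a single sentence is only a placeholder
examples = [tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized("Wizard-Vicuna-30B-SuperHOT-8k-GPTQ", use_safetensors=True)
```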
|
davidmunechika/coreml-openjourney
|
davidmunechika
| 2023-06-26T23:45:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-13T22:21:59Z |
---
license: creativeml-openrail-m
---
|
lamoglia/ppo-Huggy
|
lamoglia
| 2023-06-26T23:26:49Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-26T23:26:46Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: lamoglia/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
wunderwuzzi/huskyai
|
wunderwuzzi
| 2023-06-26T23:15:20Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-26T23:08:50Z |
---
license: mit
---
# Husky AI - Overview
Husky AI is part of my [machine learning attack series](https://embracethered.com/blog/posts/2020/machine-learning-attack-series-overview/). This repo contains the models and code to run the Husky AI web server.
Have fun learning more about machine learning!

|
harpomaxx/ppo-LunarLander-v2
|
harpomaxx
| 2023-06-26T23:12:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T22:40:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.12 +/- 13.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint from the Hub (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed checkpoint filename inside the repo
checkpoint = load_from_hub("harpomaxx/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
askatasuna/psy_q_a_test
|
askatasuna
| 2023-06-26T22:58:59Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T20:30:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: psy_q_a_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# psy_q_a_test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3329
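Until the sections below are filled in, a minimal hedged inference sketch with the `question-answering` pipeline (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="askatasuna/psy_q_a_test")
result = qa(
    question="What base model was fine-tuned?",
    context="This checkpoint was fine-tuned from distilbert-base-uncased on an undocumented QA dataset.",
)
print(result["answer"], result["score"])
```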
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4872 | 1.0 | 3593 | 1.4473 |
| 1.444 | 2.0 | 7186 | 1.3626 |
| 1.3245 | 3.0 | 10779 | 1.3329 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dtcalabro/test_model
|
dtcalabro
| 2023-06-26T22:54:05Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T16:31:14Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cupcakeDriveby/rlClass
|
cupcakeDriveby
| 2023-06-26T22:45:32Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T22:45:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.48 +/- 21.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the checkpoint from the Hub (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed checkpoint filename inside the cupcakeDriveby/rlClass repo
checkpoint = load_from_hub("cupcakeDriveby/rlClass", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AImod3ls/aether-green
|
AImod3ls
| 2023-06-26T22:37:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T22:33:01Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### aether-green Dreambooth model trained by AImod3ls with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
.jpg)
.png)
.png)
.png)
.png)
.png)
.png)
|
OumaElha/speech8
|
OumaElha
| 2023-06-26T22:20:28Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T22:12:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: speech8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speech8
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lucasairvc/dominicrvc
|
lucasairvc
| 2023-06-26T22:18:44Z | 0 | 0 | null |
[
"music",
"en",
"license:lgpl-3.0",
"region:us"
] | null | 2023-06-26T21:59:25Z |
---
license: lgpl-3.0
language:
- en
tags:
- music
---
# DOMINIC JAMES RVC MODEL

[Download It Here! NOW!!!](https://huggingface.co/lucasairvc/dominicrvc/blob/main/dominicjamesgaming.zip)
Check out one of the demo covers here: [without me](https://huggingface.co/lucasairvc/dominicrvc/resolve/main/DOMINICJAMES_WITHOUT-ME.mp3)
How to install:
Colab:
- Right click on the download link and copy the link address. Paste it into step 2 and run. It's installed.
Local:
- Download the file. Put the "added_" file into logs, and put the dommydom.pth in the weights folder. It's installed.
ENJOY!
|
maidh/Reinforce-Pixelcopter-PLE-v0
|
maidh
| 2023-06-26T22:16:48Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T22:16:25Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.50 +/- 15.42
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|