| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-09 12:33:01) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 550 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-09 12:32:40) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
SamJoshua/llama2-qlora-orca
|
SamJoshua
| 2023-09-03T13:05:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T13:05:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
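For reference, this corresponds roughly to the following `transformers.BitsAndBytesConfig` (a sketch only; the field names mirror the list above):
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch: the 4-bit NF4 settings listed above, expressed as a BitsAndBytesConfig.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```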
### Framework versions
- PEFT 0.6.0.dev0
|
SharKRippeR/QA_model
|
SharKRippeR
| 2023-09-03T13:02:50Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-09-03T12:54:56Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: QA_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4599
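A minimal usage sketch, assuming the checkpoint loads through the standard `question-answering` pipeline (the question and context below are only illustrative):
```python
from transformers import pipeline

# Sketch: run extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="SharKRippeR/QA_model")
result = qa(
    question="What was the model fine-tuned on?",
    context="QA_model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```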
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.0433 |
| 2.6909 | 2.0 | 500 | 1.5259 |
| 2.6909 | 3.0 | 750 | 1.4599 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
petals-team/falcon-rw-1b
|
petals-team
| 2023-09-03T12:56:43Z | 168 | 2 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2306.01116",
"arxiv:2005.14165",
"arxiv:2108.12409",
"arxiv:2205.14135",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-09-03T12:55:54Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# Falcon-RW-1B
**Falcon-RW-1B is a 1B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). It is made available under the Apache 2.0 license.**
See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for more details.
RefinedWeb is a high-quality web dataset built by leveraging stringent filtering and large-scale deduplication. Falcon-RW-1B, trained on RefinedWeb only, matches or outperforms comparable models trained on curated data.
⚠️ Falcon is now available as a core model in the `transformers` library! To use the in-library version, please install the latest version of `transformers` with `pip install git+https://github.com/huggingface/transformers.git`, then simply remove the `trust_remote_code=True` argument from `from_pretrained()`.
⚠️ This model is intended for use as a **research artifact**, to study the influence of training on web data alone. **If you are interested in state-of-the-art models, we recommend using Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b), both trained on >1,000 billion tokens.**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
# Model Card for Falcon-RW-1B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** Apache 2.0.
### Model Source
- **Paper:** [https://arxiv.org/abs/2306.01116](https://arxiv.org/abs/2306.01116).
## Uses
### Direct Use
Research on large language models, specifically the influence of adequately filtered and deduplicated web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Broadly speaking, we would recommend Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) for any use not directly related to research on web data pipelines.
## Bias, Risks, and Limitations
Falcon-RW-1B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpus representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-RW-1B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-RW-1B was trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset. The data was tokenized with the GPT-2 tokenizer.
### Training Procedure
Falcon-RW-1B was trained on 32 A100 40GB GPUs, using only data parallelism with ZeRO.
#### Training Hyperparameters
Hyperparameters were adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 2e-4 | 500M tokens warm-up, cosine decay to 2e-5 |
| Weight decay | 1e-1 | |
| Batch size | 512 | 4B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early December 2022 and took about six days.
## Evaluation
See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for in-depth evaluation.
## Technical Specifications
### Model Architecture and Objective
Falcon-RW-1B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), but uses ALiBi ([Press et al., 2021](https://arxiv.org/abs/2108.12409)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 24 | |
| `d_model` | 2048 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 50304 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-RW-1B was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-RW-1B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## Contact
falconllm@tii.ae
|
franziskaM/b29-wav2vec2-large-xls-r-romansh-colab
|
franziskaM
| 2023-09-03T12:46:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T10:53:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b29-wav2vec2-large-xls-r-romansh-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_13_0
type: common_voice_13_0
config: rm-vallader
split: test
args: rm-vallader
metrics:
- name: Wer
type: wer
value: 0.231951560316721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b29-wav2vec2-large-xls-r-romansh-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2967
- Wer: 0.2320
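A minimal usage sketch, assuming the checkpoint works with the standard `automatic-speech-recognition` pipeline and 16 kHz mono audio (the file path is only an example):
```python
from transformers import pipeline

# Sketch: transcribe an audio clip with the fine-tuned wav2vec2 checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="franziskaM/b29-wav2vec2-large-xls-r-romansh-colab",
)
print(asr("example_clip.wav")["text"])  # path is illustrative
```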
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3337 | 3.05 | 400 | 2.9529 | 1.0 |
| 2.9274 | 6.11 | 800 | 2.8462 | 0.9995 |
| 1.0082 | 9.16 | 1200 | 0.3782 | 0.3628 |
| 0.2754 | 12.21 | 1600 | 0.3225 | 0.2857 |
| 0.168 | 15.27 | 2000 | 0.3102 | 0.2748 |
| 0.1198 | 18.32 | 2400 | 0.3077 | 0.2513 |
| 0.1053 | 21.37 | 2800 | 0.3086 | 0.2531 |
| 0.0829 | 24.43 | 3200 | 0.2985 | 0.2396 |
| 0.0726 | 27.48 | 3600 | 0.2967 | 0.2320 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kargaranamir/Hengam
|
kargaranamir
| 2023-09-03T12:40:16Z | 5 | 3 |
span-marker
|
[
"span-marker",
"token-classification",
"ner",
"named-entity-recognition",
"fa",
"dataset:kargaranamir/HengamCorpus",
"license:mit",
"region:us"
] |
token-classification
| 2022-10-21T21:05:04Z |
---
license: mit
datasets:
- kargaranamir/HengamCorpus
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
pipeline_tag: token-classification
inference: false
language:
- fa
---
# Hengam: An Adversarially Trained Transformer for Persian Temporal Tagging
# Usage
You can use this model directly by downloading the utils and requirements files and installing the requirements:
```python
>>> ! wget https://huggingface.co/kargaranamir/Hengam/raw/main/utils.py
>>> ! wget https://huggingface.co/kargaranamir/Hengam/raw/main/requirements.txt
>>> ! pip install -r requirements.txt
```
then downloading the model HengamTransA.pth or HengamTransW.pth and building the NER pipeline:
```python
>>> import torch
>>> from huggingface_hub import hf_hub_download
>>> from utils import *
>>> # HengamTransW = hf_hub_download(repo_id="kargaranamir/Hengam", filename="HengamTransW.pth")
>>> HengamTransA = hf_hub_download(repo_id="kargaranamir/Hengam", filename="HengamTransA.pth")
```
```python
>>> # ner = NER(model_path=HengamTransW, tags=['B-TIM', 'I-TIM', 'B-DAT', 'I-DAT', 'O'])
>>> ner = NER(model_path=HengamTransA, tags=['B-TIM', 'I-TIM', 'B-DAT', 'I-DAT', 'O'])
>>> ner('.سلام من و دوستم ساعت ۸ صبح روز سه شنبه رفتیم دوشنبه بازار ')
[{'Text': 'ساعت', 'Tag': 'B-TIM', 'Start': 17, 'End': 21},
{'Text': '۸', 'Tag': 'I-TIM', 'Start': 22, 'End': 23},
{'Text': 'صبح', 'Tag': 'I-TIM', 'Start': 24, 'End': 27},
{'Text': 'روز', 'Tag': 'I-TIM', 'Start': 28, 'End': 31},
{'Text': 'سه', 'Tag': 'B-DAT', 'Start': 32, 'End': 34},
{'Text': 'شنبه', 'Tag': 'I-DAT', 'Start': 35, 'End': 39}]
```
## Citation
If you use any part of this repository in your research, please cite it using the following BibTeX entry.
```bibtex
@inproceedings{mirzababaei-etal-2022-hengam,
title = {Hengam: An Adversarially Trained Transformer for {P}ersian Temporal Tagging},
author = {Mirzababaei, Sajad and Kargaran, Amir Hossein and Sch{\"u}tze, Hinrich and Asgari, Ehsaneddin},
year = 2022,
booktitle = {Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing},
publisher = {Association for Computational Linguistics},
address = {Online only},
pages = {1013--1024},
url = {https://aclanthology.org/2022.aacl-main.74}
}
```
|
bigmorning/whisper_input_decoder_no_lob__0015
|
bigmorning
| 2023-09-03T12:34:04Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T12:33:56Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_no_lob__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_no_lob__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9595
- Train Accuracy: 0.0138
- Train Wermet: 0.7012
- Validation Loss: 3.1493
- Validation Accuracy: 0.0132
- Validation Wermet: 0.7718
- Epoch: 14
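Since the checkpoint is stored in TensorFlow format (see the `tf` tag), transcription would look roughly like the sketch below, which borrows the processor from the `openai/whisper-tiny` base model and feeds a placeholder 16 kHz waveform:
```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

# Sketch: load the TF checkpoint and transcribe a 16 kHz waveform.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")  # base-model processor
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/whisper_input_decoder_no_lob__0015"
)

waveform = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence
inputs = processor(waveform, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```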
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.4122 | 0.0107 | 0.9328 | 3.9759 | 0.0114 | 0.9606 | 0 |
| 4.7176 | 0.0116 | 0.8683 | 3.9404 | 0.0114 | 0.9334 | 1 |
| 4.6750 | 0.0117 | 0.8478 | 3.9211 | 0.0115 | 0.9237 | 2 |
| 4.6511 | 0.0117 | 0.8413 | 3.8864 | 0.0115 | 0.9331 | 3 |
| 4.6294 | 0.0118 | 0.8270 | 3.8729 | 0.0115 | 0.9228 | 4 |
| 4.6134 | 0.0118 | 0.8199 | 3.8690 | 0.0114 | 0.9451 | 5 |
| 4.5980 | 0.0118 | 0.8102 | 3.8491 | 0.0115 | 0.9152 | 6 |
| 4.5759 | 0.0119 | 0.7890 | 3.8366 | 0.0116 | 0.8691 | 7 |
| 4.5518 | 0.0120 | 0.7694 | 3.8081 | 0.0116 | 0.9013 | 8 |
| 4.5219 | 0.0121 | 0.7591 | 3.7734 | 0.0118 | 0.8383 | 9 |
| 4.4761 | 0.0122 | 0.7400 | 3.7156 | 0.0120 | 0.8125 | 10 |
| 4.4139 | 0.0125 | 0.7257 | 3.6311 | 0.0121 | 0.8188 | 11 |
| 4.3113 | 0.0128 | 0.7127 | 3.5089 | 0.0124 | 0.8008 | 12 |
| 4.1608 | 0.0132 | 0.7088 | 3.3587 | 0.0127 | 0.7742 | 13 |
| 3.9595 | 0.0138 | 0.7012 | 3.1493 | 0.0132 | 0.7718 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_no_lob__0010
|
bigmorning
| 2023-09-03T12:20:57Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T12:20:49Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_no_lob__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_no_lob__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5219
- Train Accuracy: 0.0121
- Train Wermet: 0.7591
- Validation Loss: 3.7734
- Validation Accuracy: 0.0118
- Validation Wermet: 0.8383
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.4122 | 0.0107 | 0.9328 | 3.9759 | 0.0114 | 0.9606 | 0 |
| 4.7176 | 0.0116 | 0.8683 | 3.9404 | 0.0114 | 0.9334 | 1 |
| 4.6750 | 0.0117 | 0.8478 | 3.9211 | 0.0115 | 0.9237 | 2 |
| 4.6511 | 0.0117 | 0.8413 | 3.8864 | 0.0115 | 0.9331 | 3 |
| 4.6294 | 0.0118 | 0.8270 | 3.8729 | 0.0115 | 0.9228 | 4 |
| 4.6134 | 0.0118 | 0.8199 | 3.8690 | 0.0114 | 0.9451 | 5 |
| 4.5980 | 0.0118 | 0.8102 | 3.8491 | 0.0115 | 0.9152 | 6 |
| 4.5759 | 0.0119 | 0.7890 | 3.8366 | 0.0116 | 0.8691 | 7 |
| 4.5518 | 0.0120 | 0.7694 | 3.8081 | 0.0116 | 0.9013 | 8 |
| 4.5219 | 0.0121 | 0.7591 | 3.7734 | 0.0118 | 0.8383 | 9 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
VinayHajare/ppo-Huggy
|
VinayHajare
| 2023-09-03T12:19:57Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-09-03T12:19:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VinayHajare/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
spinor75/qlora-koalpaca-polyglot-12.8b-100step
|
spinor75
| 2023-09-03T12:14:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T12:14:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
bigmorning/whisper_input_decoder_no_lob__0005
|
bigmorning
| 2023-09-03T12:07:50Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T12:07:42Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_no_lob__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_no_lob__0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6294
- Train Accuracy: 0.0118
- Train Wermet: 0.8270
- Validation Loss: 3.8729
- Validation Accuracy: 0.0115
- Validation Wermet: 0.9228
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.4122 | 0.0107 | 0.9328 | 3.9759 | 0.0114 | 0.9606 | 0 |
| 4.7176 | 0.0116 | 0.8683 | 3.9404 | 0.0114 | 0.9334 | 1 |
| 4.6750 | 0.0117 | 0.8478 | 3.9211 | 0.0115 | 0.9237 | 2 |
| 4.6511 | 0.0117 | 0.8413 | 3.8864 | 0.0115 | 0.9331 | 3 |
| 4.6294 | 0.0118 | 0.8270 | 3.8729 | 0.0115 | 0.9228 | 4 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
jaober/Pixelcopter-PLE-v0
|
jaober
| 2023-09-03T12:03:42Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T21:29:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 4.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Kamer/FlavioNoEng
|
Kamer
| 2023-09-03T11:59:33Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:nlpaueb/legal-bert-base-uncased",
"base_model:finetune:nlpaueb/legal-bert-base-uncased",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-03T10:53:41Z |
---
license: cc-by-sa-4.0
base_model: nlpaueb/legal-bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: FlavioNoEng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FlavioNoEng
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4980
- eval_Accuracy: 0.8841
- eval_F1_macro: 0.7387
- eval_F1_class_0: 0.9302
- eval_F1_class_1: 0.0
- eval_F1_class_2: 0.8950
- eval_F1_class_3: 0.8000
- eval_F1_class_4: 0.8000
- eval_F1_class_5: 0.9057
- eval_F1_class_6: 0.7170
- eval_F1_class_7: 0.9663
- eval_F1_class_8: 0.9831
- eval_F1_class_9: 0.7931
- eval_F1_class_10: 0.8483
- eval_F1_class_11: 0.8333
- eval_F1_class_12: 0.7975
- eval_F1_class_13: 0.5714
- eval_F1_class_14: 0.8734
- eval_F1_class_15: 0.3077
- eval_F1_class_16: 0.0
- eval_F1_class_17: 0.9760
- eval_F1_class_18: 0.8525
- eval_F1_class_19: 0.9231
- eval_runtime: 34.849
- eval_samples_per_second: 32.426
- eval_steps_per_second: 2.037
- epoch: 3.93
- step: 2500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
YassineBenlaria/tamasheq-99-2
|
YassineBenlaria
| 2023-09-03T11:28:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:ad019el/tamasheq-99-2",
"base_model:finetune:ad019el/tamasheq-99-2",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-02T21:57:53Z |
---
base_model: ad019el/tamasheq-99-2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: tamasheq-99-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamasheq-99-2
This model is a fine-tuned version of [ad019el/tamasheq-99-2](https://huggingface.co/ad019el/tamasheq-99-2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3830
- Wer: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.9932 | 15.79 | 300 | 3.5172 | 1.0 |
| 2.9067 | 31.58 | 600 | 1.7973 | 1.0282 |
| 0.7973 | 47.37 | 900 | 1.1744 | 0.8757 |
| 0.4535 | 63.16 | 1200 | 1.2484 | 0.8475 |
| 0.3511 | 78.95 | 1500 | 1.3254 | 0.8616 |
| 0.3156 | 94.74 | 1800 | 1.3830 | 0.8701 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
wangrongsheng/Baichuan-13B-Chat-sft-super
|
wangrongsheng
| 2023-09-03T11:24:08Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T11:23:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
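A minimal loading sketch, assuming the adapter was trained on top of `baichuan-inc/Baichuan-13B-Chat` (inferred from the repository name; the card does not state the base model) and reusing the 4-bit settings above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: the base model is baichuan-inc/Baichuan-13B-Chat (not stated in the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "wangrongsheng/Baichuan-13B-Chat-sft-super")
```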
### Framework versions
- PEFT 0.4.0
|
Ahmedhisham/Arabic_dialect_identifier
|
Ahmedhisham
| 2023-09-03T11:12:43Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"text-classification",
"license:mit",
"region:us"
] |
text-classification
| 2023-09-03T10:37:12Z |
---
license: mit
metrics:
- precision
- recall
library_name: keras
pipeline_tag: text-classification
---
|
bigmorning/whisper_attention_1_0005
|
bigmorning
| 2023-09-03T11:10:16Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T10:26:23Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_attention_1_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_attention_1_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0706
- Train Accuracy: 0.0133
- Train Wermet: 1.1544
- Validation Loss: 3.3059
- Validation Accuracy: 0.0127
- Validation Wermet: 2.5474
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 4.3421 | 0.0126 | 1.0868 | 3.5901 | 0.0122 | 1.7563 | 0 |
| 4.2960 | 0.0127 | 1.0419 | 3.5479 | 0.0122 | 1.6770 | 1 |
| 4.2437 | 0.0128 | 1.1301 | 3.4931 | 0.0124 | 1.2281 | 2 |
| 4.1660 | 0.0130 | 1.1307 | 3.4015 | 0.0125 | 1.7745 | 3 |
| 4.0706 | 0.0133 | 1.1544 | 3.3059 | 0.0127 | 2.5474 | 4 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
anik550689/output_model
|
anik550689
| 2023-09-03T10:45:19Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-03T08:47:01Z |
---
license: openrail++
base_model: /home/ahmed/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/snapshots/bf714989e22c57ddc1c453bf74dab4521acb81d8
instance_prompt:
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - anik550689/output_model
These are LoRA adaptation weights for /home/ahmed/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-1.0/snapshots/bf714989e22c57ddc1c453bf74dab4521acb81d8. The weights were trained using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: True.
Special VAE used for training: None.
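A minimal inference sketch, assuming the local snapshot path above corresponds to the Hub checkpoint `stabilityai/stable-diffusion-xl-base-1.0` and that the LoRA weights in this repository load via `load_lora_weights` (the prompt is only illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: the base model is stabilityai/stable-diffusion-xl-base-1.0.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("anik550689/output_model")

image = pipe("a photo in the trained style", num_inference_steps=30).images[0]
image.save("sample.png")
```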
|
Echolist-yixuan/chatglm2-6b-qlora
|
Echolist-yixuan
| 2023-09-03T10:38:54Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"license:afl-3.0",
"region:us"
] |
feature-extraction
| 2023-09-03T10:25:19Z |
---
license: afl-3.0
---
This model is fine-tuned from ChatGLM2-6B with QLoRA. The only difference between this model and ChatGLM2-6B should be its added knowledge of the "LoRA" and "QLoRA" techniques.
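A minimal loading sketch, assuming the repository ships full model weights together with the usual ChatGLM2 custom code (hence `trust_remote_code=True`; the `chat` helper comes from that custom code):
```python
from transformers import AutoModel, AutoTokenizer

# ChatGLM2 ships custom modeling code, so trust_remote_code must be enabled.
model_id = "Echolist-yixuan/chatglm2-6b-qlora"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).half().cuda()

response, history = model.chat(tokenizer, "What is QLoRA?", history=[])
print(response)
```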
|
Muhammadreza/mann-e-comics-revised-2
|
Muhammadreza
| 2023-09-03T10:08:27Z | 15 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-03T09:55:31Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mann-e_comics-revised-2 Dreambooth model trained by Muhammadreza with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
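The checkpoint can also be tried directly with `diffusers` (a sketch, assuming the repo loads as a standard `StableDiffusionPipeline` as the tags indicate; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: load the DreamBooth checkpoint and render a sample image.
pipe = StableDiffusionPipeline.from_pretrained(
    "Muhammadreza/mann-e-comics-revised-2", torch_dtype=torch.float16
).to("cuda")
image = pipe("a comic book panel of a city at night", num_inference_steps=30).images[0]
image.save("sample.png")
```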
Sample pictures of this concept:
|
AmelieSchreiber/esm2_t6_8M_finetuned_human_protein_binding_sites
|
AmelieSchreiber
| 2023-09-03T10:06:52Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-03T10:02:59Z |
---
license: mit
language:
- en
library_name: transformers
---
# ESM-2 for Predicting Binding Sites of Human Proteins
```
Precision: 0.5381751045207555
Recall: 0.9426927311243982
F1 Score: 0.5602464778964296
```
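A minimal usage sketch, assuming the checkpoint works with the standard `token-classification` pipeline on a raw protein sequence (the sequence below is illustrative):
```python
from transformers import pipeline

# Sketch: predict per-residue binding-site labels for a protein sequence.
classifier = pipeline(
    "token-classification",
    model="AmelieSchreiber/esm2_t6_8M_finetuned_human_protein_binding_sites",
)
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # illustrative sequence
for token in classifier(sequence):
    print(token["word"], token["entity"], round(token["score"], 3))
```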
|
bigmorning/whisper_attention_0015
|
bigmorning
| 2023-09-03T10:05:13Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T10:05:04Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_attention_0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_attention_0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1328
- Train Accuracy: 0.0132
- Train Wermet: 1.1476
- Validation Loss: 3.2918
- Validation Accuracy: 0.0129
- Validation Wermet: 1.3463
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.4192 | 0.0107 | 1.9359 | 3.9929 | 0.0112 | 3.4029 | 0 |
| 4.7175 | 0.0116 | 1.3557 | 3.9525 | 0.0113 | 3.2613 | 1 |
| 4.6756 | 0.0117 | 1.4198 | 3.9189 | 0.0113 | 2.6795 | 2 |
| 4.6543 | 0.0117 | 1.3165 | 3.9021 | 0.0114 | 2.2678 | 3 |
| 4.6317 | 0.0118 | 1.2794 | 3.8796 | 0.0114 | 1.8964 | 4 |
| 4.6128 | 0.0118 | 1.2033 | 3.8579 | 0.0115 | 1.6353 | 5 |
| 4.5945 | 0.0118 | 1.1814 | 3.8787 | 0.0114 | 3.6041 | 6 |
| 4.5719 | 0.0119 | 1.1171 | 3.8418 | 0.0116 | 1.1922 | 7 |
| 4.5503 | 0.0120 | 1.1435 | 3.8061 | 0.0117 | 1.8502 | 8 |
| 4.5235 | 0.0121 | 1.0483 | 3.7736 | 0.0118 | 1.4279 | 9 |
| 4.4837 | 0.0122 | 1.0371 | 3.7294 | 0.0119 | 1.6705 | 10 |
| 4.4401 | 0.0123 | 1.0621 | 3.6991 | 0.0118 | 3.1038 | 11 |
| 4.3684 | 0.0125 | 1.0436 | 3.6220 | 0.0121 | 3.1267 | 12 |
| 4.2692 | 0.0128 | 1.1086 | 3.4681 | 0.0124 | 1.1431 | 13 |
| 4.1328 | 0.0132 | 1.1476 | 3.2918 | 0.0129 | 1.3463 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_attention_0010
|
bigmorning
| 2023-09-03T09:51:55Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T09:51:43Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_attention_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_attention_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5235
- Train Accuracy: 0.0121
- Train Wermet: 1.0483
- Validation Loss: 3.7736
- Validation Accuracy: 0.0118
- Validation Wermet: 1.4279
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.4192 | 0.0107 | 1.9359 | 3.9929 | 0.0112 | 3.4029 | 0 |
| 4.7175 | 0.0116 | 1.3557 | 3.9525 | 0.0113 | 3.2613 | 1 |
| 4.6756 | 0.0117 | 1.4198 | 3.9189 | 0.0113 | 2.6795 | 2 |
| 4.6543 | 0.0117 | 1.3165 | 3.9021 | 0.0114 | 2.2678 | 3 |
| 4.6317 | 0.0118 | 1.2794 | 3.8796 | 0.0114 | 1.8964 | 4 |
| 4.6128 | 0.0118 | 1.2033 | 3.8579 | 0.0115 | 1.6353 | 5 |
| 4.5945 | 0.0118 | 1.1814 | 3.8787 | 0.0114 | 3.6041 | 6 |
| 4.5719 | 0.0119 | 1.1171 | 3.8418 | 0.0116 | 1.1922 | 7 |
| 4.5503 | 0.0120 | 1.1435 | 3.8061 | 0.0117 | 1.8502 | 8 |
| 4.5235 | 0.0121 | 1.0483 | 3.7736 | 0.0118 | 1.4279 | 9 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
AshutoshD245/food_classifier
|
AshutoshD245
| 2023-09-03T09:12:52Z | 63 | 1 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-03T05:07:32Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: AshutoshD245/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AshutoshD245/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3889
- Validation Loss: 0.3585
- Train Accuracy: 0.914
- Epoch: 4
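A minimal usage sketch, assuming the checkpoint works with the standard `image-classification` pipeline; since only TensorFlow weights are listed in the tags, the TF framework is requested explicitly (the image path is only an example):
```python
from transformers import pipeline

# Sketch: classify a food image with the fine-tuned ViT checkpoint (TF weights).
classifier = pipeline(
    "image-classification",
    model="AshutoshD245/food_classifier",
    framework="tf",
)
print(classifier("example_dish.jpg"))  # path is illustrative
```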
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8233 | 1.6956 | 0.808 | 0 |
| 1.2230 | 0.8527 | 0.882 | 1 |
| 0.7043 | 0.5496 | 0.896 | 2 |
| 0.4912 | 0.4837 | 0.882 | 3 |
| 0.3889 | 0.3585 | 0.914 | 4 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/mizuhara_chizuru-07
|
LarryAIDraw
| 2023-09-03T09:12:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:08:13Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/139211/chizuru-mizuhara
|
LarryAIDraw/saraliya_DG
|
LarryAIDraw
| 2023-09-03T09:12:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:07:51Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/139196/saraliya-corwen-log-horizon
|
LarryAIDraw/fenniS_CB-v1
|
LarryAIDraw
| 2023-09-03T09:11:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:07:25Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/138586/or-fenny-or-or-snowbreak-containment-zone-or-or
|
LarryAIDraw/magahara_desumi
|
LarryAIDraw
| 2023-09-03T09:10:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:06:59Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/138684/desumi-magahara-love-after-world-domination-or
|
LarryAIDraw/ayanami_niconico
|
LarryAIDraw
| 2023-09-03T09:09:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:05:06Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/138764/ayanami-niconico-or-niconico-or-azur-lane
|
bendico765/DuplicatiDistillBertFullTraining
|
bendico765
| 2023-09-03T09:05:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T14:58:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DuplicatiDistillBertFullTraining
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DuplicatiDistillBertFullTraining
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4670
- Accuracy: 0.8904
- F1 Macro: 0.8349
- F1 Class 0: 0.9526
- F1 Class 1: 0.6667
- F1 Class 2: 0.8398
- F1 Class 3: 0.8278
- F1 Class 4: 0.8050
- F1 Class 5: 0.9111
- F1 Class 6: 0.8943
- F1 Class 7: 0.9504
- F1 Class 8: 0.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| 1.3064 | 0.25 | 250 | 0.7912 | 0.7411 | 0.5153 | 0.9353 | 0.0 | 0.5769 | 0.0 | 0.5222 | 0.8352 | 0.8477 | 0.9206 | 0.0 |
| 0.7377 | 0.5 | 500 | 0.6851 | 0.8024 | 0.6114 | 0.9458 | 0.0 | 0.6388 | 0.6040 | 0.6406 | 0.8646 | 0.8772 | 0.9313 | 0.0 |
| 0.5968 | 0.75 | 750 | 0.5917 | 0.8421 | 0.6460 | 0.9474 | 0.0 | 0.7722 | 0.7052 | 0.6909 | 0.8887 | 0.8812 | 0.9281 | 0.0 |
| 0.5028 | 1.01 | 1000 | 0.5893 | 0.8502 | 0.6523 | 0.9476 | 0.0 | 0.7700 | 0.7263 | 0.7564 | 0.8674 | 0.8537 | 0.9497 | 0.0 |
| 0.4657 | 1.26 | 1250 | 0.5319 | 0.8663 | 0.6671 | 0.9493 | 0.0 | 0.7830 | 0.7870 | 0.7650 | 0.8965 | 0.8777 | 0.9457 | 0.0 |
| 0.4047 | 1.51 | 1500 | 0.5214 | 0.8708 | 0.7452 | 0.9492 | 0.0 | 0.8141 | 0.7774 | 0.7784 | 0.8755 | 0.8978 | 0.9477 | 0.6667 |
| 0.4021 | 1.76 | 1750 | 0.5208 | 0.8773 | 0.7344 | 0.9476 | 0.0 | 0.7609 | 0.7879 | 0.8015 | 0.9156 | 0.8945 | 0.9563 | 0.5455 |
| 0.4 | 2.01 | 2000 | 0.4734 | 0.8879 | 0.8306 | 0.9527 | 0.6667 | 0.8274 | 0.8047 | 0.7965 | 0.9217 | 0.8856 | 0.9531 | 0.6667 |
| 0.2616 | 2.26 | 2250 | 0.5733 | 0.8763 | 0.7283 | 0.9577 | 0.0 | 0.7973 | 0.7926 | 0.8100 | 0.9012 | 0.8978 | 0.9278 | 0.4706 |
| 0.3004 | 2.52 | 2500 | 0.5050 | 0.8934 | 0.7959 | 0.9672 | 0.3333 | 0.8480 | 0.8235 | 0.8051 | 0.9149 | 0.8903 | 0.9556 | 0.625 |
| 0.3136 | 2.77 | 2750 | 0.4735 | 0.8894 | 0.8483 | 0.9511 | 0.9091 | 0.8444 | 0.7893 | 0.7992 | 0.9186 | 0.9 | 0.9514 | 0.5714 |
| 0.3091 | 3.02 | 3000 | 0.4670 | 0.8904 | 0.8349 | 0.9526 | 0.6667 | 0.8398 | 0.8278 | 0.8050 | 0.9111 | 0.8943 | 0.9504 | 0.6667 |
| 0.1983 | 3.27 | 3250 | 0.5770 | 0.8914 | 0.8328 | 0.9551 | 0.7500 | 0.8478 | 0.7956 | 0.8120 | 0.9156 | 0.8884 | 0.9598 | 0.5714 |
| 0.1782 | 3.52 | 3500 | 0.5193 | 0.8974 | 0.8245 | 0.9511 | 0.5714 | 0.8410 | 0.8353 | 0.8225 | 0.9196 | 0.9123 | 0.9521 | 0.6154 |
| 0.2419 | 3.77 | 3750 | 0.4857 | 0.8949 | 0.8129 | 0.9567 | 0.5 | 0.8495 | 0.7988 | 0.8177 | 0.9209 | 0.8980 | 0.9587 | 0.6154 |
| 0.2209 | 4.02 | 4000 | 0.5167 | 0.8994 | 0.7900 | 0.9501 | 0.3333 | 0.8509 | 0.8134 | 0.8345 | 0.9215 | 0.9112 | 0.9621 | 0.5333 |
| 0.1367 | 4.28 | 4250 | 0.6125 | 0.8919 | 0.8537 | 0.9582 | 0.8889 | 0.8411 | 0.8144 | 0.8190 | 0.9066 | 0.8820 | 0.9580 | 0.6154 |
| 0.1523 | 4.53 | 4500 | 0.5453 | 0.8944 | 0.8287 | 0.9565 | 0.7500 | 0.8404 | 0.8249 | 0.8155 | 0.9147 | 0.9002 | 0.9561 | 0.5 |
| 0.1666 | 4.78 | 4750 | 0.5185 | 0.9025 | 0.8497 | 0.9713 | 0.6667 | 0.8392 | 0.8394 | 0.8306 | 0.9226 | 0.9027 | 0.9601 | 0.7143 |
| 0.1388 | 5.03 | 5000 | 0.5815 | 0.8934 | 0.7865 | 0.9583 | 0.3333 | 0.8462 | 0.8288 | 0.8217 | 0.9126 | 0.8908 | 0.9604 | 0.5263 |
| 0.1039 | 5.28 | 5250 | 0.6477 | 0.8929 | 0.8184 | 0.9533 | 0.5 | 0.8431 | 0.8239 | 0.8103 | 0.9150 | 0.8913 | 0.9616 | 0.6667 |
| 0.0942 | 5.53 | 5500 | 0.6873 | 0.8864 | 0.8112 | 0.9603 | 0.6667 | 0.8424 | 0.8033 | 0.8031 | 0.9017 | 0.8914 | 0.9559 | 0.4762 |
| 0.1063 | 5.78 | 5750 | 0.6684 | 0.8944 | 0.8325 | 0.9675 | 0.5714 | 0.8557 | 0.8120 | 0.8204 | 0.9082 | 0.8884 | 0.9547 | 0.7143 |
| 0.0945 | 6.04 | 6000 | 0.6209 | 0.8939 | 0.8183 | 0.9654 | 0.5714 | 0.8537 | 0.8184 | 0.8112 | 0.9175 | 0.8982 | 0.9405 | 0.5882 |
| 0.0771 | 6.29 | 6250 | 0.6268 | 0.8994 | 0.8563 | 0.9638 | 0.7500 | 0.8398 | 0.8363 | 0.8373 | 0.9123 | 0.8924 | 0.9605 | 0.7143 |
| 0.0845 | 6.54 | 6500 | 0.6382 | 0.8939 | 0.8417 | 0.9692 | 0.7500 | 0.8429 | 0.8179 | 0.8151 | 0.9123 | 0.8884 | 0.9548 | 0.625 |
| 0.0673 | 6.79 | 6750 | 0.6561 | 0.9010 | 0.8315 | 0.9693 | 0.5714 | 0.8404 | 0.8214 | 0.8342 | 0.9252 | 0.8928 | 0.9616 | 0.6667 |
| 0.0641 | 7.04 | 7000 | 0.7066 | 0.8879 | 0.8407 | 0.9617 | 0.7500 | 0.8467 | 0.7923 | 0.8107 | 0.9077 | 0.8795 | 0.9512 | 0.6667 |
| 0.039 | 7.29 | 7250 | 0.6932 | 0.8949 | 0.8459 | 0.9659 | 0.7500 | 0.8510 | 0.8079 | 0.8178 | 0.9185 | 0.8767 | 0.9590 | 0.6667 |
| 0.0372 | 7.55 | 7500 | 0.6786 | 0.8984 | 0.8705 | 0.9658 | 0.8889 | 0.8626 | 0.8232 | 0.8194 | 0.9134 | 0.8859 | 0.9607 | 0.7143 |
| 0.0504 | 7.8 | 7750 | 0.6914 | 0.8949 | 0.8598 | 0.9641 | 0.9091 | 0.8478 | 0.8202 | 0.8104 | 0.9177 | 0.8874 | 0.9561 | 0.625 |
| 0.0409 | 8.05 | 8000 | 0.7027 | 0.8984 | 0.8501 | 0.9658 | 0.7500 | 0.8475 | 0.8387 | 0.8195 | 0.9142 | 0.8879 | 0.9607 | 0.6667 |
| 0.0196 | 8.3 | 8250 | 0.7222 | 0.8969 | 0.8530 | 0.9659 | 0.7500 | 0.8492 | 0.8202 | 0.8123 | 0.9184 | 0.8849 | 0.9621 | 0.7143 |
| 0.0323 | 8.55 | 8500 | 0.6858 | 0.8999 | 0.8551 | 0.9697 | 0.8889 | 0.8606 | 0.8235 | 0.8218 | 0.9181 | 0.9015 | 0.9561 | 0.5556 |
| 0.0274 | 8.8 | 8750 | 0.6813 | 0.9010 | 0.8557 | 0.9660 | 0.8889 | 0.8517 | 0.8300 | 0.8270 | 0.9186 | 0.9015 | 0.9618 | 0.5556 |
| 0.0212 | 9.05 | 9000 | 0.7197 | 0.8979 | 0.8608 | 0.9677 | 0.8889 | 0.8456 | 0.8272 | 0.8281 | 0.9111 | 0.8899 | 0.9633 | 0.625 |
| 0.0065 | 9.31 | 9250 | 0.7363 | 0.8979 | 0.8601 | 0.9696 | 0.8889 | 0.8463 | 0.8199 | 0.8220 | 0.9152 | 0.8924 | 0.9618 | 0.625 |
| 0.0115 | 9.56 | 9500 | 0.7331 | 0.8974 | 0.8647 | 0.9677 | 0.8889 | 0.8504 | 0.8249 | 0.8204 | 0.9105 | 0.8909 | 0.9619 | 0.6667 |
| 0.0059 | 9.81 | 9750 | 0.7349 | 0.8989 | 0.8660 | 0.9695 | 0.8889 | 0.8462 | 0.8319 | 0.8226 | 0.9121 | 0.8953 | 0.9606 | 0.6667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
LarryAIDraw/chara_SonoBisqueDoll_InuiShinju_v1
|
LarryAIDraw
| 2023-09-03T09:03:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T09:00:24Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/138946/inui-shinju-or-sono-bisque-doll-wa-koi-wo-suru
|
LarryAIDraw/dayuexia_m3
|
LarryAIDraw
| 2023-09-03T09:02:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T08:59:19Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/139027/grown-up-tericula-or-honkai-impact-3rd
|
ganlongnz/finetuning-sentiment-model-3000-samples_v1
|
ganlongnz
| 2023-09-03T08:51:40Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T10:23:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples_v1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3194
- Accuracy: 0.87
- F1: 0.8713
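A minimal usage sketch, assuming the checkpoint works with the standard `text-classification` pipeline (the review text is only illustrative):
```python
from transformers import pipeline

# Sketch: score the sentiment of a movie review with the fine-tuned checkpoint.
sentiment = pipeline(
    "text-classification",
    model="ganlongnz/finetuning-sentiment-model-3000-samples_v1",
)
print(sentiment("This movie was surprisingly good, I would watch it again."))
```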
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
922-Narra/tagalog-lm-lora-tests
|
922-Narra
| 2023-09-03T08:43:28Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-16T14:10:28Z |
---
license: openrail
---
Experimental Tagalog LoRAs: safe or accurate outputs are not guaranteed (not for production use)!
Note: better/best results with
* Prompting in Tagalog
* Using format "Human: (prompt)\nAssistant:"
Example:
"Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:"
# lt2_08162023
* Fine tuned on a small dataset of 14 items, manually edited
* 1 epoch (barely any noticeable results)
* From chat LLaMA-2-7b
* Lora of chat-tagalog v0.1
# lt2_08162023a
* Fine tuned on a small dataset of 14 items, manually edited
* 20 epochs (more observable effects)
* From chat LLaMA-2-7b
* Lora of [chat-tagalog v0.1a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.1a)
# lt2_08162023b
* Fine tuned on a small dataset of 14 items, manually edited
* 10 epochs
* From chat LLaMA-2-7b
* Lora of chat-tagalog v0.1b
# lt2_08162023c
* Fine tuned on a small dataset of 14 items, manually edited
* 50 epochs (overfitted)
* From chat LLaMA-2-7b
* Lora of chat-tagalog v0.1c
# lt2_08162023d
* Fine tuned on a small dataset of 14 items, manually edited
* 30 epochs (v0.1a further trained and cut-off before overfit)
* From chat LLaMA-2-7b
* Lora of [chat-tagalog v0.1d](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.1d)
# llama-2-7b-tagalog-v0.2 loras (08/26/2023)
* Fine tuned on a dataset of ~10k items (mixed)
* 2/2a/2b fine-tuned for 1/2/3 epochs
* From chat LLaMA-2-7b
* Future attempt planned with cleaner chat/dialogue data
# hopia-3b-v0.1 (08/26/2023)
* Fine tuned on a small dataset of 14 items, manually edited
* 20 epochs
* From Open LLaMA 3b
# llama-2-7b-tagalog-v0.3 loras (09/01/2023)
* Fine tuned on a dataset of ~1k items (Tagalog-focused dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
* 3/3a fine-tuned for 1/2 epochs
* From chat LLaMA-2-7b
* Experiment on partially synthetic data (and observing capability of LLaMA-2 base on generating Tagalog): will be further curating dataset
* Loras for [chat-tagalog v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3) and [chat-tagalog v0.3a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3a)
# llama-2-7b-tagalog-v0.3WC2 (09/01/2023)
* Fine tuned on experimental dataset of ~6k items (Tagalog-focused dataset, based off Tagalog sentences and Wiki entries augmented by LLaMA-2-13b to create a dialogue-QnA dataset between Human and Assistant)
* 1 epoch
* From chat LLaMA-2-7b
# llama-2-13b-tagalog-v0.3 loras (09/01-02/2023)
* Fine tuned on experimental datasets of ~1k items (Tagalog-focused dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
* 3 fine-tuned for 1 epoch, rank = 16, lora alpha = 32
* 3a with rank = 8
* 3b for 2 epochs
* 3c for 1 epoch, lr = 1e-4, warmup steps = 0.1
* 3d with lr = 2e-4, rank = 32, lora alpha = 64
* 3e for 2 epochs
* From LLaMA-2-13b
* Trying LLaMA-2-13b chat/other base and curated dataset for next attempts
|
TheBloke/robin-13B-v2-fp16
|
TheBloke
| 2023-09-03T08:38:16Z | 1,555 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-16T18:59:47Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 13B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 13B v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-13B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
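A minimal generation sketch with this fp16 checkpoint and the prompt template above (a sketch only; it assumes `transformers`, `torch`, enough GPU memory for a 13B model in float16, and `accelerate` for `device_map="auto"`):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/robin-13B-v2-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions\n"
    "###Human: What is the capital of France?\n"
    "###Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```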
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 13B v2
No model card provided in source repository.
|
dkqjrm/20230903121524
|
dkqjrm
| 2023-09-03T08:22:19Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-03T03:15:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230903121524'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230903121524
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9097
- Accuracy: 0.6442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.7286 | 0.5 |
| 0.7482 | 2.0 | 680 | 0.7273 | 0.5 |
| 0.7442 | 3.0 | 1020 | 0.7313 | 0.5 |
| 0.7442 | 4.0 | 1360 | 0.7599 | 0.5 |
| 0.7355 | 5.0 | 1700 | 0.7222 | 0.6113 |
| 0.6979 | 6.0 | 2040 | 0.7373 | 0.6160 |
| 0.6979 | 7.0 | 2380 | 0.6950 | 0.6583 |
| 0.6629 | 8.0 | 2720 | 0.6711 | 0.6740 |
| 0.6282 | 9.0 | 3060 | 0.7543 | 0.6599 |
| 0.6282 | 10.0 | 3400 | 0.7217 | 0.6520 |
| 0.6023 | 11.0 | 3740 | 0.7513 | 0.6426 |
| 0.5705 | 12.0 | 4080 | 0.6886 | 0.6693 |
| 0.5705 | 13.0 | 4420 | 0.6779 | 0.6755 |
| 0.5607 | 14.0 | 4760 | 0.7978 | 0.6489 |
| 0.527 | 15.0 | 5100 | 0.6722 | 0.6771 |
| 0.527 | 16.0 | 5440 | 0.8047 | 0.6317 |
| 0.5226 | 17.0 | 5780 | 0.7721 | 0.6740 |
| 0.5133 | 18.0 | 6120 | 0.7900 | 0.6552 |
| 0.5133 | 19.0 | 6460 | 0.7563 | 0.6599 |
| 0.5054 | 20.0 | 6800 | 0.8456 | 0.6411 |
| 0.4836 | 21.0 | 7140 | 0.8232 | 0.6426 |
| 0.4836 | 22.0 | 7480 | 0.7993 | 0.6270 |
| 0.4796 | 23.0 | 7820 | 0.8026 | 0.6426 |
| 0.4659 | 24.0 | 8160 | 0.8306 | 0.6254 |
| 0.4669 | 25.0 | 8500 | 0.8153 | 0.6505 |
| 0.4669 | 26.0 | 8840 | 0.8499 | 0.6489 |
| 0.4487 | 27.0 | 9180 | 0.8366 | 0.6332 |
| 0.4499 | 28.0 | 9520 | 0.7661 | 0.6567 |
| 0.4499 | 29.0 | 9860 | 0.7668 | 0.6630 |
| 0.4483 | 30.0 | 10200 | 0.8147 | 0.6520 |
| 0.4303 | 31.0 | 10540 | 0.8030 | 0.6442 |
| 0.4303 | 32.0 | 10880 | 0.8346 | 0.6285 |
| 0.4272 | 33.0 | 11220 | 0.7779 | 0.6489 |
| 0.43 | 34.0 | 11560 | 0.8193 | 0.6599 |
| 0.43 | 35.0 | 11900 | 0.8792 | 0.6411 |
| 0.4139 | 36.0 | 12240 | 0.8091 | 0.6332 |
| 0.4139 | 37.0 | 12580 | 0.7939 | 0.6458 |
| 0.4139 | 38.0 | 12920 | 0.8626 | 0.6505 |
| 0.4102 | 39.0 | 13260 | 0.8111 | 0.6442 |
| 0.4065 | 40.0 | 13600 | 0.8054 | 0.6583 |
| 0.4065 | 41.0 | 13940 | 0.8704 | 0.6520 |
| 0.4049 | 42.0 | 14280 | 0.8441 | 0.6348 |
| 0.3978 | 43.0 | 14620 | 0.8723 | 0.6411 |
| 0.3978 | 44.0 | 14960 | 0.8747 | 0.6552 |
| 0.4074 | 45.0 | 15300 | 0.8662 | 0.6505 |
| 0.3952 | 46.0 | 15640 | 0.8432 | 0.6442 |
| 0.3952 | 47.0 | 15980 | 0.8837 | 0.6552 |
| 0.3868 | 48.0 | 16320 | 0.8219 | 0.6583 |
| 0.3805 | 49.0 | 16660 | 0.7792 | 0.6536 |
| 0.386 | 50.0 | 17000 | 0.8385 | 0.6520 |
| 0.386 | 51.0 | 17340 | 0.8554 | 0.6505 |
| 0.3869 | 52.0 | 17680 | 0.8655 | 0.6583 |
| 0.3772 | 53.0 | 18020 | 0.8613 | 0.6552 |
| 0.3772 | 54.0 | 18360 | 0.9268 | 0.6364 |
| 0.3744 | 55.0 | 18700 | 0.8710 | 0.6473 |
| 0.378 | 56.0 | 19040 | 0.9222 | 0.6395 |
| 0.378 | 57.0 | 19380 | 0.8803 | 0.6536 |
| 0.3702 | 58.0 | 19720 | 0.9055 | 0.6364 |
| 0.3687 | 59.0 | 20060 | 0.8305 | 0.6630 |
| 0.3687 | 60.0 | 20400 | 0.9229 | 0.6395 |
| 0.3677 | 61.0 | 20740 | 0.9214 | 0.6301 |
| 0.3635 | 62.0 | 21080 | 0.9074 | 0.6458 |
| 0.3635 | 63.0 | 21420 | 0.8890 | 0.6520 |
| 0.3613 | 64.0 | 21760 | 0.8725 | 0.6426 |
| 0.3634 | 65.0 | 22100 | 0.8860 | 0.6489 |
| 0.3634 | 66.0 | 22440 | 0.8428 | 0.6614 |
| 0.3528 | 67.0 | 22780 | 0.8792 | 0.6458 |
| 0.3613 | 68.0 | 23120 | 0.8840 | 0.6254 |
| 0.3613 | 69.0 | 23460 | 0.8960 | 0.6489 |
| 0.3516 | 70.0 | 23800 | 0.8763 | 0.6567 |
| 0.348 | 71.0 | 24140 | 0.8935 | 0.6332 |
| 0.348 | 72.0 | 24480 | 0.9031 | 0.6442 |
| 0.3567 | 73.0 | 24820 | 0.9070 | 0.6458 |
| 0.3514 | 74.0 | 25160 | 0.8997 | 0.6426 |
| 0.3543 | 75.0 | 25500 | 0.9025 | 0.6458 |
| 0.3543 | 76.0 | 25840 | 0.9028 | 0.6379 |
| 0.3457 | 77.0 | 26180 | 0.9155 | 0.6364 |
| 0.3452 | 78.0 | 26520 | 0.8973 | 0.6426 |
| 0.3452 | 79.0 | 26860 | 0.9085 | 0.6458 |
| 0.3379 | 80.0 | 27200 | 0.9097 | 0.6442 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
victornica/molgpt_selfies_mosesonly
|
victornica
| 2023-09-03T08:14:06Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-03T04:51:02Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: molgpt_selfies_mosesonly
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# molgpt_selfies_mosesonly
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1282 | 0.18 | 1000 | 0.7807 |
| 0.7302 | 0.36 | 2000 | 0.6754 |
| 0.6658 | 0.54 | 3000 | 0.6378 |
| 0.6381 | 0.72 | 4000 | 0.6180 |
| 0.6208 | 0.9 | 5000 | 0.6067 |
| 0.6072 | 1.08 | 6000 | 0.5968 |
| 0.5973 | 1.26 | 7000 | 0.5859 |
| 0.5897 | 1.44 | 8000 | 0.5795 |
| 0.5837 | 1.62 | 9000 | 0.5724 |
| 0.5778 | 1.79 | 10000 | 0.5683 |
| 0.5729 | 1.97 | 11000 | 0.5639 |
| 0.5664 | 2.15 | 12000 | 0.5613 |
| 0.5621 | 2.33 | 13000 | 0.5555 |
| 0.5592 | 2.51 | 14000 | 0.5520 |
| 0.5552 | 2.69 | 15000 | 0.5481 |
| 0.5524 | 2.87 | 16000 | 0.5449 |
| 0.5474 | 3.05 | 17000 | 0.5420 |
| 0.5426 | 3.23 | 18000 | 0.5397 |
| 0.5405 | 3.41 | 19000 | 0.5369 |
| 0.538 | 3.59 | 20000 | 0.5338 |
| 0.5353 | 3.77 | 21000 | 0.5307 |
| 0.5329 | 3.95 | 22000 | 0.5283 |
| 0.5266 | 4.13 | 23000 | 0.5264 |
| 0.5237 | 4.31 | 24000 | 0.5236 |
| 0.522 | 4.49 | 25000 | 0.5218 |
| 0.5206 | 4.67 | 26000 | 0.5198 |
| 0.5191 | 4.85 | 27000 | 0.5182 |
| 0.5165 | 5.03 | 28000 | 0.5168 |
| 0.5113 | 5.21 | 29000 | 0.5159 |
| 0.5104 | 5.38 | 30000 | 0.5150 |
| 0.5105 | 5.56 | 31000 | 0.5143 |
| 0.5098 | 5.74 | 32000 | 0.5140 |
| 0.5094 | 5.92 | 33000 | 0.5139 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
JunF1122/xlm-roberta-base-finetuned-panx-de
|
JunF1122
| 2023-09-03T08:05:41Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-02T14:26:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863220155832338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8632
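A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="JunF1122/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```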
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1642 | 0.8263 |
| 0.1289 | 2.0 | 1050 | 0.1397 | 0.8420 |
| 0.0819 | 3.0 | 1575 | 0.1352 | 0.8632 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
urbija/ner-bio-annotated-4
|
urbija
| 2023-09-03T07:55:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-28T17:01:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner-bio-annotated-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-bio-annotated-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1253
- Precision: 0.7316
- Recall: 0.7846
- F1: 0.7572
- Accuracy: 0.9640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 67 | 0.1690 | 0.5398 | 0.6195 | 0.5769 | 0.9422 |
| No log | 2.0 | 134 | 0.1422 | 0.6725 | 0.7493 | 0.7089 | 0.9562 |
| No log | 3.0 | 201 | 0.1253 | 0.7316 | 0.7846 | 0.7572 | 0.9640 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
AndrewL088/Pyramids
|
AndrewL088
| 2023-09-03T07:31:43Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-09-03T07:14:25Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AndrewL088/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chunwoolee0/ke_t5_base_bongsoo_en_ko
|
chunwoolee0
| 2023-09-03T07:20:59Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:KETI-AIR/ke-t5-base",
"base_model:finetune:KETI-AIR/ke-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-25T00:50:28Z |
---
license: apache-2.0
base_model: KETI-AIR/ke-t5-base
tags:
- generated_from_trainer
model-index:
- name: ke_t5_base_bongsoo_en_ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ke_t5_base_bongsoo_en_ko
This model is a fine-tuned version of [KETI-AIR/ke-t5-base](https://huggingface.co/KETI-AIR/ke-t5-base)
on the [bongsoo/news_news_talk_en_ko](https://huggingface.co/datasets/bongsoo/news_talk_en_ko) dataset.
See [translation_ke_t5_base_bongsoo_en_ko.ipynb](https://github.com/chunwoolee0/ko-nlp/blob/main/translation_ke_t5_base_bongsoo_en_ko.ipynb)
## Model description
KE-T5 is a pretrained T5 (text-to-text transfer transformer) model developed by KETI (한국전자연구원, Korea Electronics Technology Institute) and trained on Korean and English corpora.
The vocabulary used by KE-T5 consists of 64,000 sub-word tokens and was created using Google's SentencePiece. The SentencePiece model was trained to cover 99.95% of a 30GB corpus with an approximate 7:3 mix of Korean and English.
## Intended uses & limitations
Translation from English to Korean
## Usage
You can use this model directly with a pipeline for translation language modeling:
```python
>>> from transformers import pipeline
>>> translator = pipeline('translation', model='chunwoolee0/ke_t5_base_bongsoo_en_ko')
>>> translator("Let us go for a walk after lunch.")
[{'translation_text': '점심을 마치고 산책을 하러 가자.'}]
>>> translator("The BRICS countries welcomed six new members from three different continents on Thursday.")
[{'translation_text': '브릭스 국가들은 지난 24일 3개 대륙 6명의 신규 회원을 환영했다.'}]
>>> translator("The BRICS countries welcomed six new members from three different continents on Thursday, marking a historic milestone that underscored the solidarity of BRICS and developing countries and determination to work together for a better future, officials and experts said.",max_length=400)
[{'translation_text': '브렙스 국가는 지난 7일 3개 대륙 6명의 신규 회원을 환영하며 BRICS와 개발도상국의 연대와 더 나은 미래를 위해 함께 노력하겠다는 의지를 재확인한 역사적인 이정표를 장식했다고 관계자들과 전문가들은 전했다.'}]
>>> translator("Biden’s decree zaps lucrative investments in China’s chip and AI sectors")
[{'translation_text': '바이든 장관의 행정명령은 중국 칩과 AI 분야의 고수익 투자를 옥죄는 것이다.'}]
>>> translator("It is most likely that China’s largest chip foundry, a key piece of the puzzle in Beijing’s efforts to achieve greater self-sufficiency in semiconductors, would not have been able to set up its first plant in Shanghai’s suburbs in the early 2000s without funding from American investors such as Walden International and Goldman Sachs.", max_length=400)
[{'translation_text': '반도체의 더 큰 자립성을 이루기 위해 베이징이 애쓰는 퍼즐의 핵심 조각인 중국 최대 칩 파운드리가 월덴인터내셔널, 골드만삭스 등 미국 투자자로부터 자금 지원을 받지 못한 채 2000년대 초 상하이 시내에 첫 공장을 지을 수 없었을 가능성이 크다.'}]
```
## Training and evaluation data
One third of the original training set of 1,200,000 sentence pairs was used because of the resource limits of Google Colab.
## Training procedure
Because of the limits of Google Colab, the model was trained for only one epoch. The result is still quite satisfactory; the quality of the translations is not bad.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 5625 | 2.4075 | 8.2272 |
- cpu usage: 4.8/12.7GB
- gpu usage: 13.0/15.0GB
- running time: 3h
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
venetis/bert-base-uncased-finetuned-3d-sentiment
|
venetis
| 2023-09-03T06:51:11Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T23:52:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-finetuned-3d-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-3d-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9271
- Accuracy: 0.7392
- Precision: 0.7455
- Recall: 0.7392
- F1: 0.7394
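A minimal usage sketch; since the training dataset is not documented here, the label names returned below depend on the model's `id2label` mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="venetis/bert-base-uncased-finetuned-3d-sentiment",
)
print(classifier("The build quality is great, but the battery life is disappointing."))
```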
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6381
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8443 | 1.0 | 1595 | 0.8265 | 0.6659 | 0.6920 | 0.6659 | 0.6629 |
| 0.6037 | 2.0 | 3190 | 0.7380 | 0.7021 | 0.7207 | 0.7021 | 0.7014 |
| 0.516 | 3.0 | 4785 | 0.6740 | 0.7246 | 0.7337 | 0.7246 | 0.7234 |
| 0.4269 | 4.0 | 6380 | 0.7221 | 0.7290 | 0.7383 | 0.7290 | 0.7271 |
| 0.3149 | 5.0 | 7975 | 0.8368 | 0.7237 | 0.7422 | 0.7237 | 0.7230 |
| 0.1996 | 6.0 | 9570 | 0.9271 | 0.7392 | 0.7455 | 0.7392 | 0.7394 |
| 0.1299 | 7.0 | 11165 | 1.1062 | 0.7358 | 0.7461 | 0.7358 | 0.7361 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
tnguyen9210/q-Taxi-v3
|
tnguyen9210
| 2023-09-03T06:45:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-03T06:45:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.80
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="tnguyen9210/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
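Continuing the snippet above, a greedy rollout might look like the sketch below. The `"qtable"` key and the Gymnasium-style `reset`/`step` API are assumptions based on the Deep RL Course notebooks; adjust if your pickle layout or gym version differs.
```python
import numpy as np

qtable = model["qtable"]            # assumed key for the learned Q-table
state, _ = env.reset()              # Gymnasium-style reset returns (obs, info)
done, total_reward = False, 0
while not done:
    action = int(np.argmax(qtable[state]))              # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```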
|
ghorbani/irangig
|
ghorbani
| 2023-09-03T06:41:05Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-03T06:41:05Z |
---
license: bigscience-openrail-m
---
|
NavpreetSingh54/my-pet-dog-xzg
|
NavpreetSingh54
| 2023-09-03T06:13:33Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-03T06:00:19Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-XZG Dreambooth model trained by NavpreetSingh54 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: IKGPTU126
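A minimal inference sketch with `diffusers` (the instance prompt below is an assumption; use the concept token that was actually used during training):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "NavpreetSingh54/my-pet-dog-xzg", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of xzg dog sitting on a beach").images[0]  # prompt wording is assumed
image.save("my_pet_dog.png")
```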
Sample pictures of this concept:
.jpeg)

.jpeg)
.jpeg)
.jpeg)

|
s3nh/sakuraumi-Sakura-13B-Galgame-GGUF
|
s3nh
| 2023-09-03T06:05:30Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-03T06:05:29Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/sakuraumi/Sakura-13B-Galgame).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
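As a minimal sketch of how such a single-file GGUF model can be consumed, the snippet below uses `llama-cpp-python`. The filename is an assumption, so pick an actual `.gguf` file from this repo, and follow the original model card for the proper prompt format.
```python
from llama_cpp import Llama

# Filename is an assumption; use one of the .gguf files actually present in this repo.
llm = Llama(model_path="sakura-13b-galgame.Q4_K_M.gguf", n_ctx=2048)

out = llm("Hello, how are you?", max_tokens=64)  # use the original model's prompt format
print(out["choices"][0]["text"])
```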
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
chunwoolee0/mt5_small_bongsoo_en_ko
|
chunwoolee0
| 2023-09-03T05:42:48Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:chunwoolee0/mt5_small_bongsoo_en_ko",
"base_model:finetune:chunwoolee0/mt5_small_bongsoo_en_ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T11:45:41Z |
---
license: apache-2.0
base_model: chunwoolee0/mt5_small_bongsoo_en_ko
tags:
- generated_from_trainer
metrics:
- rouge
- sacrebleu
model-index:
- name: mt5_small_bongsoo_en_ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_small_bongsoo_en_ko
This model is a fine-tuned version of [chunwoolee0/mt5_small_bongsoo_en_ko](https://huggingface.co/chunwoolee0/mt5_small_bongsoo_en_ko)
on the [bongsoo/news_talk_en_ko](https://huggingface.co/datasets/bongsoo/news_talk_en_ko) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7805
- Rouge1: 0.1932
- Rouge2: 0.0394
- Rougel: 0.1895
- Sacrebleu: 0.4518
## Model description
mT5 is a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages.
## Intended uses & limitations
Translation from English to Korean
## Usage
You can use this model directly with a pipeline for translation language modeling:
```python
>>> from transformers import pipeline
>>> translator = pipeline('translation', model='chunwoolee0/ke_t5_base_bongsoo_en_ko')
>>> translator("Let us go for a walk after lunch.")
[{'translation_text': '식당에 앉아서 밤에 갔다.'}]
>>> translator("Skinner's reward is mostly eye-watering.")
[{'translation_text': '벤더의 선물은 너무 마음이 쏠린다.'}]
```
## Training and evaluation data
The value of max_length is critical for training. The value of 128 that is common for Indo-European languages causes serious GPU-memory problems here, so it was reduced to 64.
Another issue is the usual 80%/20% train/validation split, which makes the evaluation step take far too long; a 99%/1% split is used here instead, with the evaluation itself unchanged.
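A preprocessing sketch for the setup described above (99%/1% split, `max_length=64`); the dataset column names are assumptions, so adjust them to the actual schema:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("bongsoo/news_talk_en_ko", split="train")
splits = raw.train_test_split(test_size=0.01, seed=42)   # 99% train / 1% validation

tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/mt5_small_bongsoo_en_ko")
max_length = 64   # reduced from the usual 128 to fit GPU memory

def preprocess(batch):
    # "en"/"ko" column names are assumptions; adjust to the dataset's schema.
    model_inputs = tokenizer(batch["en"], max_length=max_length, truncation=True)
    labels = tokenizer(text_target=batch["ko"], max_length=max_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = splits.map(preprocess, batched=True)
```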
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.8338 | 0.16 | 500 | 2.9626 | 0.1475 | 0.0184 | 0.1455 | 0.4243 |
| 3.7865 | 0.32 | 1000 | 2.9305 | 0.1529 | 0.0181 | 0.1508 | 0.4435 |
| 3.7436 | 0.48 | 1500 | 2.9067 | 0.1572 | 0.019 | 0.155 | 0.4464 |
| 3.7207 | 0.65 | 2000 | 2.8924 | 0.165 | 0.0233 | 0.1629 | 0.4532 |
| 3.7022 | 0.81 | 2500 | 2.8825 | 0.1647 | 0.0231 | 0.1627 | 0.4504 |
| 3.69 | 0.97 | 3000 | 2.8778 | 0.1662 | 0.0237 | 0.1647 | 0.4694 |
Google's mT5 model cannot be used for Korean out of the box even though it was trained on 101 languages: fine-tuning on even a very large dataset such as bongsoo/news_talk_en_ko still yields garbage. Since the GPU memory available in free Colab sessions is very limited, repeated fine-tuning runs on splits of the dataset were performed in the hope of better results. Theoretically this might help, but in practice it did not; the results actually became worse. For English-to-Korean translation one should use other models such as ke-t5 by KETI (한국전자연구원).
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
guidoivetta/lacan
|
guidoivetta
| 2023-09-03T05:29:46Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-03T05:22:48Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: lacan
results: []
widget:
- text: "Freud designates for us"
example_title: "Freud"
- text: "Power is defined as"
example_title: "Power"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lacan
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.4317
- eval_runtime: 11.3322
- eval_samples_per_second: 87.538
- eval_steps_per_second: 10.942
- epoch: 6.0
- step: 12066
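A minimal generation sketch matching the widget examples above (sampling parameters are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="guidoivetta/lacan")
print(generator("Freud designates for us", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```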
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Aswesay/Test_01
|
Aswesay
| 2023-09-03T05:26:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-03T05:26:07Z |
---
license: creativeml-openrail-m
---
|
amir36/langchain_adapter
|
amir36
| 2023-09-03T05:13:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T05:13:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
bigmorning/whisper_syl_noforce_nostart__0020
|
bigmorning
| 2023-09-03T04:56:17Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T04:56:08Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_noforce_nostart__0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_noforce_nostart__0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7583
- Train Accuracy: 0.0173
- Train Wermet: 0.5911
- Validation Loss: 2.6383
- Validation Accuracy: 0.0139
- Validation Wermet: 0.6695
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6298 | 0.0091 | 1.6176 | 4.3084 | 0.0092 | 1.0203 | 0 |
| 4.9271 | 0.0098 | 0.8937 | 4.1324 | 0.0099 | 0.9075 | 1 |
| 4.6878 | 0.0106 | 0.8360 | 3.9151 | 0.0102 | 0.9003 | 2 |
| 4.4454 | 0.0113 | 0.8275 | 3.7558 | 0.0106 | 0.8730 | 3 |
| 4.2497 | 0.0119 | 0.8211 | 3.6019 | 0.0110 | 0.8640 | 4 |
| 4.0917 | 0.0123 | 0.8067 | 3.5363 | 0.0111 | 0.8512 | 5 |
| 3.9616 | 0.0127 | 0.7864 | 3.4492 | 0.0113 | 0.8432 | 6 |
| 3.8575 | 0.0130 | 0.7742 | 3.3963 | 0.0113 | 0.8414 | 7 |
| 3.7605 | 0.0133 | 0.7580 | 3.3430 | 0.0115 | 0.8197 | 8 |
| 3.6756 | 0.0136 | 0.7447 | 3.2872 | 0.0117 | 0.8071 | 9 |
| 3.6021 | 0.0138 | 0.7370 | 3.2828 | 0.0117 | 0.8165 | 10 |
| 3.5237 | 0.0140 | 0.7218 | 3.2439 | 0.0118 | 0.8088 | 11 |
| 3.4558 | 0.0143 | 0.7105 | 3.2063 | 0.0120 | 0.7890 | 12 |
| 3.3853 | 0.0145 | 0.6993 | 3.1702 | 0.0120 | 0.8035 | 13 |
| 3.3101 | 0.0148 | 0.6870 | 3.1144 | 0.0123 | 0.7605 | 14 |
| 3.2314 | 0.0152 | 0.6719 | 3.0522 | 0.0125 | 0.7481 | 15 |
| 3.1430 | 0.0155 | 0.6575 | 2.9911 | 0.0127 | 0.7378 | 16 |
| 3.0392 | 0.0160 | 0.6369 | 2.9249 | 0.0129 | 0.7357 | 17 |
| 2.9134 | 0.0166 | 0.6148 | 2.7883 | 0.0134 | 0.6909 | 18 |
| 2.7583 | 0.0173 | 0.5911 | 2.6383 | 0.0139 | 0.6695 | 19 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
rrozb/SnowballTarget1
|
rrozb
| 2023-09-03T04:45:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-09-03T04:45:32Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rrozb/SnowballTarget1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Joemother4/Garfield.zip
|
Joemother4
| 2023-09-03T04:40:22Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-09-03T04:40:22Z |
---
license: bigscience-openrail-m
---
|
bigmorning/whisper_syl_noforce_nostart__0010
|
bigmorning
| 2023-09-03T04:29:46Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T04:29:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_noforce_nostart__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_noforce_nostart__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6756
- Train Accuracy: 0.0136
- Train Wermet: 0.7447
- Validation Loss: 3.2872
- Validation Accuracy: 0.0117
- Validation Wermet: 0.8071
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6298 | 0.0091 | 1.6176 | 4.3084 | 0.0092 | 1.0203 | 0 |
| 4.9271 | 0.0098 | 0.8937 | 4.1324 | 0.0099 | 0.9075 | 1 |
| 4.6878 | 0.0106 | 0.8360 | 3.9151 | 0.0102 | 0.9003 | 2 |
| 4.4454 | 0.0113 | 0.8275 | 3.7558 | 0.0106 | 0.8730 | 3 |
| 4.2497 | 0.0119 | 0.8211 | 3.6019 | 0.0110 | 0.8640 | 4 |
| 4.0917 | 0.0123 | 0.8067 | 3.5363 | 0.0111 | 0.8512 | 5 |
| 3.9616 | 0.0127 | 0.7864 | 3.4492 | 0.0113 | 0.8432 | 6 |
| 3.8575 | 0.0130 | 0.7742 | 3.3963 | 0.0113 | 0.8414 | 7 |
| 3.7605 | 0.0133 | 0.7580 | 3.3430 | 0.0115 | 0.8197 | 8 |
| 3.6756 | 0.0136 | 0.7447 | 3.2872 | 0.0117 | 0.8071 | 9 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
albagon/Reinforce-CartPole-v1
|
albagon
| 2023-09-03T04:27:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-03T04:27:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
thirosh0520/detr-resnet-50_finetuned-room-objects
|
thirosh0520
| 2023-09-03T04:11:38Z | 160 | 1 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-09-02T18:26:27Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned-room-objects
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned-room-objects
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.
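A minimal inference sketch with the object-detection pipeline; the image URL is a placeholder, so point it at any room photo:
```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="thirosh0520/detr-resnet-50_finetuned-room-objects",
)
# Placeholder image URL; any RGB image of a room will do.
print(detector("https://example.com/living_room.jpg"))
```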
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kyungmin011029/category_0903
|
kyungmin011029
| 2023-09-03T04:05:23Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-03T04:03:23Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_keras_callback
model-index:
- name: category_0903
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# category_0903
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
johaanm/test-planner-alpha-V6.2
|
johaanm
| 2023-09-03T04:05:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T04:05:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
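For reference, an equivalent `BitsAndBytesConfig` for the settings listed above would look like this sketch:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```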
### Framework versions
- PEFT 0.4.0
|
kyungmin011029/code_0903
|
kyungmin011029
| 2023-09-03T04:04:32Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-03T04:03:22Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_keras_callback
model-index:
- name: code_0903
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# code_0903
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Shawt/uu
|
Shawt
| 2023-09-03T04:04:17Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-09-03T04:03:26Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Abbood/stable-diff-abdul
|
Abbood
| 2023-09-03T03:58:08Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-03T03:58:05Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of AR
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
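A minimal inference sketch, assuming this repo stores DreamBooth LoRA weights on top of the SDXL base (the usual AutoTrain output); if the repo is instead a full pipeline, load it directly with `DiffusionPipeline.from_pretrained`.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Abbood/stable-diff-abdul")   # assumes LoRA-format weights

image = pipe("photo of AR", num_inference_steps=30).images[0]
image.save("ar.png")
```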
|
flytech/insa-large
|
flytech
| 2023-09-03T03:57:16Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:openai-community/gpt2-large",
"base_model:finetune:openai-community/gpt2-large",
"license:mit",
"region:us"
] | null | 2023-09-02T15:46:00Z |
---
license: mit
base_model: gpt2-large
tags:
- generated_from_trainer
model-index:
- name: insa-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# insa-large
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5478 | 2.0 | 1000 | 1.4349 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yotoshihiro/a2c-PandaReachDense-v2
|
yotoshihiro
| 2023-09-03T03:38:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-21T08:30:56Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.34 +/- 0.68
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687)
|
monsoon-nlp/bert-base-thai
|
monsoon-nlp
| 2023-09-03T03:33:31Z | 788 | 12 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"th",
"arxiv:1609.08144",
"arxiv:1508.07909",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: th
---
# BERT-th
Adapted from https://github.com/ThAIKeras/bert for HuggingFace/Transformers library
## Pre-tokenization
You must run the original ThaiTokenizer to have your tokenization match that of the original model.
If you skip this step, you will not do much better than
mBERT or random chance!
[Refer to this CoLab notebook](https://colab.research.google.com/drive/1Ax9OsbTPwBBP1pJx1DkYwtgKILcj3Ur5?usp=sharing)
or follow these steps:
```bash
pip install pythainlp six sentencepiece python-crfsuite
git clone https://github.com/ThAIKeras/bert
# download .vocab and .model files from ThAIKeras/bert > Tokenization section
```
Or from [.vocab](https://raw.githubusercontent.com/jitkapat/thaipostagger/master/th.wiki.bpe.op25000.vocab)
and [.model](https://raw.githubusercontent.com/jitkapat/thaipostagger/master/th.wiki.bpe.op25000.model) links.
Then set up the `ThaiTokenizer` class - this is modified slightly to
remove a TensorFlow dependency.
```python
import collections
import unicodedata
import six


def convert_to_unicode(text):
    """Converts `text` to Unicode (if it's not already), assuming utf-8 input."""
    if six.PY3:
        if isinstance(text, str):
            return text
        elif isinstance(text, bytes):
            return text.decode("utf-8", "ignore")
        else:
            raise ValueError("Unsupported string type: %s" % (type(text)))
    elif six.PY2:
        if isinstance(text, str):
            return text.decode("utf-8", "ignore")
        elif isinstance(text, unicode):
            return text
        else:
            raise ValueError("Unsupported string type: %s" % (type(text)))
    else:
        raise ValueError("Not running on Python2 or Python 3?")


def load_vocab(vocab_file):
    vocab = collections.OrderedDict()
    index = 0
    with open(vocab_file, "r") as reader:
        while True:
            token = reader.readline()
            if token.split():
                token = token.split()[0]  # to support SentencePiece vocab file
            token = convert_to_unicode(token)
            if not token:
                break
            token = token.strip()
            vocab[token] = index
            index += 1
    return vocab


#####

from bert.bpe_helper import BPE
import sentencepiece as spm


def convert_by_vocab(vocab, items):
    output = []
    for item in items:
        output.append(vocab[item])
    return output


class ThaiTokenizer(object):
    """Tokenizes Thai texts."""

    def __init__(self, vocab_file, spm_file):
        self.vocab = load_vocab(vocab_file)
        self.inv_vocab = {v: k for k, v in self.vocab.items()}
        self.bpe = BPE(vocab_file)
        self.s = spm.SentencePieceProcessor()
        self.s.Load(spm_file)

    def tokenize(self, text):
        bpe_tokens = self.bpe.encode(text).split(' ')
        spm_tokens = self.s.EncodeAsPieces(text)
        tokens = bpe_tokens if len(bpe_tokens) < len(spm_tokens) else spm_tokens
        split_tokens = []
        for token in tokens:
            new_token = token
            if token.startswith('_') and not token in self.vocab:
                split_tokens.append('_')
                new_token = token[1:]
            if not new_token in self.vocab:
                split_tokens.append('<unk>')
            else:
                split_tokens.append(new_token)
        return split_tokens

    def convert_tokens_to_ids(self, tokens):
        return convert_by_vocab(self.vocab, tokens)

    def convert_ids_to_tokens(self, ids):
        return convert_by_vocab(self.inv_vocab, ids)
```
Then pre-tokenize your own text:
```python
from pythainlp import sent_tokenize
tokenizer = ThaiTokenizer(vocab_file='th.wiki.bpe.op25000.vocab', spm_file='th.wiki.bpe.op25000.model')
txt = "กรุงเทพมหานครเป็นเขตปกครองพิเศษของประเทศไทย มิได้มีสถานะเป็นจังหวัด คำว่า \"กรุงเทพมหานคร\" นั้นยังใช้เรียกองค์กรปกครองส่วนท้องถิ่นของกรุงเทพมหานครอีกด้วย"
split_sentences = sent_tokenize(txt)
print(split_sentences)
"""
['กรุงเทพมหานครเป็นเขตปกครองพิเศษของประเทศไทย ',
'มิได้มีสถานะเป็นจังหวัด ',
'คำว่า "กรุงเทพมหานคร" นั้นยังใช้เรียกองค์กรปกครองส่วนท้องถิ่นของกรุงเทพมหานครอีกด้วย']
"""
split_words = ' '.join(tokenizer.tokenize(' '.join(split_sentences)))
print(split_words)
"""
'▁กรุงเทพมหานคร เป็นเขต ปกครอง พิเศษ ของประเทศไทย ▁มิ ได้มี สถานะเป็น จังหวัด ▁คําว่า ▁" กรุงเทพมหานคร " ▁นั้น...' # continues
"""
```
Original README follows:
---
Google's [**BERT**](https://github.com/google-research/bert) is currently the state-of-the-art method of pre-training text representations which additionally provides multilingual models. ~~Unfortunately, Thai is the only one in 103 languages that is excluded due to difficulties in word segmentation.~~
BERT-th presents the Thai-only pre-trained model based on the BERT-Base structure. It is now available to download.
* **[`BERT-Base, Thai`](https://drive.google.com/open?id=1J3uuXZr_Se_XIFHj7zlTJ-C9wzI9W_ot)**: BERT-Base architecture, Thai-only model
BERT-th also includes relevant codes and scripts along with the pre-trained model, all of which are the modified versions of those in the original BERT project.
## Preprocessing
### Data Source
Training data for BERT-th come from [the latest article dump of Thai Wikipedia](https://dumps.wikimedia.org/thwiki/latest/thwiki-latest-pages-articles.xml.bz2) on November 2, 2018. The raw texts are extracted by using [WikiExtractor](https://github.com/attardi/wikiextractor).
### Sentence Segmentation
Input data need to be segmented into separate sentences before further processing by BERT modules. Since Thai language has no explicit marker at the end of a sentence, it is quite problematic to pinpoint sentence boundaries. To the best of our knowledge, there is still no implementation of Thai sentence segmentation elsewhere. So, in this project, sentence segmentation is done by applying simple heuristics, considering spaces, sentence length and common conjunctions.
After preprocessing, the training corpus consists of approximately 2 million sentences and 40 million words (counting words after word segmentation by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)). The plain and segmented texts can be downloaded **[`here`](https://drive.google.com/file/d/1QZSOpikO6Qc02gRmyeb_UiRLtTmUwGz1/view?usp=sharing)**.
## Tokenization
BERT uses [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) as a tokenization mechanism, but it is Google-internal, so we cannot apply existing Thai word segmentation and then utilize WordPiece to learn the set of subword units. The best alternative is [SentencePiece](https://github.com/google/sentencepiece), which implements [BPE](https://arxiv.org/abs/1508.07909) and needs no word segmentation.
In this project, we adopt a pre-trained Thai SentencePiece model from [BPEmb](https://github.com/bheinzerling/bpemb). The model of 25000 vocabularies is chosen and the vocabulary file has to be augmented with BERT's special characters, including '[PAD]', '[CLS]', '[SEP]' and '[MASK]'. The model and vocabulary files can be downloaded **[`here`](https://drive.google.com/file/d/1F7pCgt3vPlarI9RxKtOZUrC_67KMNQ1W/view?usp=sharing)**.
`SentencePiece` and `bpe_helper.py` from BPEmb are both used to tokenize data. A `ThaiTokenizer` class has been added to BERT's `tokenization.py` for tokenizing Thai texts.
## Pre-training
The data can be prepared before pre-training by using this script.
```shell
export BPE_DIR=/path/to/bpe
export TEXT_DIR=/path/to/text
export DATA_DIR=/path/to/data
python create_pretraining_data.py \
--input_file=$TEXT_DIR/thaiwikitext_sentseg \
--output_file=$DATA_DIR/tf_examples.tfrecord \
--vocab_file=$BPE_DIR/th.wiki.bpe.op25000.vocab \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--masked_lm_prob=0.15 \
--random_seed=12345 \
--dupe_factor=5 \
--thai_text=True \
--spm_file=$BPE_DIR/th.wiki.bpe.op25000.model
```
Then, the following script can be run to learn a model from scratch.
```shell
export DATA_DIR=/path/to/data
export BERT_BASE_DIR=/path/to/bert_base
python run_pretraining.py \
--input_file=$DATA_DIR/tf_examples.tfrecord \
--output_dir=$BERT_BASE_DIR \
--do_train=True \
--do_eval=True \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--train_batch_size=32 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=1000000 \
--num_warmup_steps=100000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=200000
```
We have trained the model for 1 million steps. On a Tesla K80 GPU, it took around 20 days to complete. However, we provide a snapshot at 0.8 million steps because it yields better results for downstream classification tasks.
## Downstream Classification Tasks
### XNLI
[XNLI](http://www.nyu.edu/projects/bowman/xnli/) is a dataset for evaluating a cross-lingual inferential classification task. The development and test sets contain 15 languages, and their data are thoroughly edited. The machine-translated versions of training data are also provided.
The Thai-only pre-trained BERT model can be applied to the XNLI task by using training data which are translated to Thai. Spaces between words in the training data need to be removed to make them consistent with inputs in the pre-training step. The processed files of XNLI related to Thai language can be downloaded **[`here`](https://drive.google.com/file/d/1ZAk1JfR6a0TSCkeyQ-EkRtk1w_mQDWFG/view?usp=sharing)**.
Afterwards, the XNLI task can be learned by using this script.
```shell
export BPE_DIR=/path/to/bpe
export XNLI_DIR=/path/to/xnli
export OUTPUT_DIR=/path/to/output
export BERT_BASE_DIR=/path/to/bert_base
python run_classifier.py \
--task_name=XNLI \
--do_train=true \
--do_eval=true \
--data_dir=$XNLI_DIR \
--vocab_file=$BPE_DIR/th.wiki.bpe.op25000.vocab \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=5e-5 \
--num_train_epochs=2.0 \
--output_dir=$OUTPUT_DIR \
--xnli_language=th \
--spm_file=$BPE_DIR/th.wiki.bpe.op25000.model
```
This table compares the Thai-only model with XNLI baselines and the Multilingual Cased model which is also trained by using translated data.
<!-- Use html table because github markdown doesn't support colspan -->
<table>
<tr>
<td colspan="2" align="center"><b>XNLI Baseline</b></td>
<td colspan="2" align="center"><b>BERT</b></td>
</tr>
<tr>
<td align="center">Translate Train</td>
<td align="center">Translate Test</td>
<td align="center">Multilingual Model</td>
<td align="center">Thai-only Model</td>
</tr>
<tr>
<td align="center">62.8</td>
<td align="center">64.4</td>
<td align="center">66.1</td>
<td align="center"><b>68.9</b></td>
</tr>
</table>
### Wongnai Review Dataset
Wongnai Review Dataset collects restaurant reviews and ratings from [Wongnai](https://www.wongnai.com/) website. The task is to classify a review into one of five ratings (1 to 5 stars). The dataset can be downloaded **[`here`](https://github.com/wongnai/wongnai-corpus)** and the following script can be run to use the Thai-only model for this task.
```shell
export BPE_DIR=/path/to/bpe
export WONGNAI_DIR=/path/to/wongnai
export OUTPUT_DIR=/path/to/output
export BERT_BASE_DIR=/path/to/bert_base
python run_classifier.py \
--task_name=wongnai \
--do_train=true \
--do_predict=true \
--data_dir=$WONGNAI_DIR \
--vocab_file=$BPE_DIR/th.wiki.bpe.op25000.vocab \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=5e-5 \
--num_train_epochs=2.0 \
--output_dir=$OUTPUT_DIR \
--spm_file=$BPE_DIR/th.wiki.bpe.op25000.model
```
Without additional preprocessing and further fine-tuning, the Thai-only BERT model can achieve 0.56612 and 0.57057 for public and private test-set scores respectively.
|
monsoon-nlp/ar-seq2seq-gender-decoder
|
monsoon-nlp
| 2023-09-03T03:30:13Z | 60 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-generation",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ar
---
# ar-seq2seq-gender (decoder)
This is a seq2seq model (decoder half) to "flip" gender in **first-person** Arabic sentences.
The model can augment your existing Arabic data, or generate counterfactuals
to test a model's decisions (would changing the gender of the subject or speaker change output?).
Intended Examples:
- 'أنا سعيد' <=> 'انا سعيدة'
- 'ركض إلى المتجر' <=> 'ركضت إلى المتجر'
People's names, gender pronouns, gendered words (father, mother), and many other values are currently unchanged by this model. Future versions may be trained on more data.
## Sample Code
```python
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "monsoon-nlp/ar-seq2seq-gender-encoder",
    "monsoon-nlp/ar-seq2seq-gender-decoder",
    min_length=40
)
tokenizer = AutoTokenizer.from_pretrained('monsoon-nlp/ar-seq2seq-gender-decoder') # same as MARBERT original
input_ids = torch.tensor(tokenizer.encode("أنا سعيدة")).unsqueeze(0)
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
tokenizer.decode(generated.tolist()[0][1 : len(input_ids[0]) - 1])
> 'انا سعيد'
```
https://colab.research.google.com/drive/1S0kE_2WiV82JkqKik_sBW-0TUtzUVmrV?usp=sharing
## Training
I originally developed
<a href="https://github.com/MonsoonNLP/el-la">a gender flip Python script</a>
for Spanish sentences, using
<a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>,
and spaCy. More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617
The Arabic model encoder and decoder started with weights and vocabulary from
<a href="https://github.com/UBC-NLP/marbert">MARBERT from UBC-NLP</a>,
and was trained on the
<a href="https://camel.abudhabi.nyu.edu/arabic-parallel-gender-corpus/">Arabic Parallel Gender Corpus</a>
from NYU Abu Dhabi. The text is first-person sentences from OpenSubtitles, with parallel
gender-reinflected sentences generated by Arabic speakers.
Training notebook: https://colab.research.google.com/drive/1TuDfnV2gQ-WsDtHkF52jbn699bk6vJZV
## Non-binary gender
This model is useful to generate male and female text samples, but falls
short of capturing gender diversity in the world and in the Arabic
language. This subject is discussed in the bias statement of the
<a href="https://www.aclweb.org/anthology/2020.gebnlp-1.12/">Gender Reinflection paper</a>.
|
dkqjrm/20230903070300
|
dkqjrm
| 2023-09-03T03:15:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-02T22:03:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230903070300'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230903070300
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8203
- Accuracy: 0.6599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 340 | 0.7251 | 0.5063 |
| 0.7449 | 2.0 | 680 | 0.7348 | 0.5 |
| 0.7388 | 3.0 | 1020 | 0.7304 | 0.5 |
| 0.7388 | 4.0 | 1360 | 0.7639 | 0.5 |
| 0.7384 | 5.0 | 1700 | 0.7316 | 0.5 |
| 0.7376 | 6.0 | 2040 | 0.7268 | 0.5 |
| 0.7376 | 7.0 | 2380 | 0.7263 | 0.5 |
| 0.7328 | 8.0 | 2720 | 0.7333 | 0.5 |
| 0.7266 | 9.0 | 3060 | 0.7533 | 0.5 |
| 0.7266 | 10.0 | 3400 | 0.7247 | 0.4984 |
| 0.7293 | 11.0 | 3740 | 0.7290 | 0.5172 |
| 0.7248 | 12.0 | 4080 | 0.7539 | 0.5 |
| 0.7248 | 13.0 | 4420 | 0.7395 | 0.5 |
| 0.7255 | 14.0 | 4760 | 0.7360 | 0.5031 |
| 0.7271 | 15.0 | 5100 | 0.7278 | 0.5 |
| 0.7271 | 16.0 | 5440 | 0.7314 | 0.5094 |
| 0.7265 | 17.0 | 5780 | 0.7417 | 0.4984 |
| 0.724 | 18.0 | 6120 | 0.7263 | 0.5 |
| 0.724 | 19.0 | 6460 | 0.7272 | 0.5031 |
| 0.723 | 20.0 | 6800 | 0.7283 | 0.5172 |
| 0.7254 | 21.0 | 7140 | 0.7284 | 0.5047 |
| 0.7254 | 22.0 | 7480 | 0.7346 | 0.4984 |
| 0.7254 | 23.0 | 7820 | 0.7295 | 0.5125 |
| 0.7259 | 24.0 | 8160 | 0.7322 | 0.5047 |
| 0.7235 | 25.0 | 8500 | 0.7327 | 0.5172 |
| 0.7235 | 26.0 | 8840 | 0.7300 | 0.5172 |
| 0.7241 | 27.0 | 9180 | 0.7345 | 0.5016 |
| 0.7227 | 28.0 | 9520 | 0.7263 | 0.5172 |
| 0.7227 | 29.0 | 9860 | 0.7341 | 0.5016 |
| 0.7212 | 30.0 | 10200 | 0.7302 | 0.5125 |
| 0.7226 | 31.0 | 10540 | 0.7346 | 0.5078 |
| 0.7226 | 32.0 | 10880 | 0.7606 | 0.4702 |
| 0.7195 | 33.0 | 11220 | 0.7357 | 0.5063 |
| 0.7226 | 34.0 | 11560 | 0.7356 | 0.5031 |
| 0.7226 | 35.0 | 11900 | 0.7397 | 0.5063 |
| 0.7224 | 36.0 | 12240 | 0.7340 | 0.5157 |
| 0.7216 | 37.0 | 12580 | 0.7319 | 0.5047 |
| 0.7216 | 38.0 | 12920 | 0.7298 | 0.5141 |
| 0.7225 | 39.0 | 13260 | 0.7438 | 0.5016 |
| 0.7197 | 40.0 | 13600 | 0.7306 | 0.5047 |
| 0.7197 | 41.0 | 13940 | 0.7279 | 0.5125 |
| 0.7206 | 42.0 | 14280 | 0.7181 | 0.5502 |
| 0.7079 | 43.0 | 14620 | 0.7566 | 0.5862 |
| 0.7079 | 44.0 | 14960 | 0.7480 | 0.6254 |
| 0.6794 | 45.0 | 15300 | 0.6922 | 0.6630 |
| 0.6556 | 46.0 | 15640 | 0.7232 | 0.6223 |
| 0.6556 | 47.0 | 15980 | 0.6961 | 0.6458 |
| 0.6438 | 48.0 | 16320 | 0.7193 | 0.6458 |
| 0.6249 | 49.0 | 16660 | 0.6663 | 0.6693 |
| 0.6117 | 50.0 | 17000 | 0.8045 | 0.6191 |
| 0.6117 | 51.0 | 17340 | 0.6984 | 0.6630 |
| 0.5961 | 52.0 | 17680 | 0.6973 | 0.6646 |
| 0.5831 | 53.0 | 18020 | 0.7606 | 0.6348 |
| 0.5831 | 54.0 | 18360 | 0.7159 | 0.6614 |
| 0.5624 | 55.0 | 18700 | 0.7947 | 0.6426 |
| 0.558 | 56.0 | 19040 | 0.8629 | 0.6238 |
| 0.558 | 57.0 | 19380 | 0.7299 | 0.6646 |
| 0.5461 | 58.0 | 19720 | 0.7642 | 0.6411 |
| 0.5322 | 59.0 | 20060 | 0.7357 | 0.6661 |
| 0.5322 | 60.0 | 20400 | 0.8926 | 0.6191 |
| 0.5253 | 61.0 | 20740 | 0.7845 | 0.6348 |
| 0.5193 | 62.0 | 21080 | 0.7580 | 0.6614 |
| 0.5193 | 63.0 | 21420 | 0.7705 | 0.6505 |
| 0.5169 | 64.0 | 21760 | 0.8464 | 0.6458 |
| 0.5021 | 65.0 | 22100 | 0.8002 | 0.6536 |
| 0.5021 | 66.0 | 22440 | 0.7595 | 0.6677 |
| 0.487 | 67.0 | 22780 | 0.7971 | 0.6458 |
| 0.4977 | 68.0 | 23120 | 0.8245 | 0.6270 |
| 0.4977 | 69.0 | 23460 | 0.8225 | 0.6379 |
| 0.4822 | 70.0 | 23800 | 0.8323 | 0.6364 |
| 0.4802 | 71.0 | 24140 | 0.8205 | 0.6364 |
| 0.4802 | 72.0 | 24480 | 0.8086 | 0.6520 |
| 0.4779 | 73.0 | 24820 | 0.7994 | 0.6567 |
| 0.4801 | 74.0 | 25160 | 0.8206 | 0.6520 |
| 0.4706 | 75.0 | 25500 | 0.8035 | 0.6442 |
| 0.4706 | 76.0 | 25840 | 0.8213 | 0.6364 |
| 0.4738 | 77.0 | 26180 | 0.8128 | 0.6630 |
| 0.4687 | 78.0 | 26520 | 0.8068 | 0.6567 |
| 0.4687 | 79.0 | 26860 | 0.8098 | 0.6630 |
| 0.4598 | 80.0 | 27200 | 0.8203 | 0.6599 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
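### How to use
A rough inference sketch: the specific SuperGLUE task and label names are not documented in this card, so the sentence pair below is only a placeholder and the returned labels may be generic `LABEL_0`/`LABEL_1` ids.
```python
from transformers import pipeline

# Load the fine-tuned BERT-large classifier; paired inputs go in as a text/text_pair dict.
classifier = pipeline("text-classification", model="dkqjrm/20230903070300")
print(classifier({"text": "The city councilmen refused the demonstrators a permit.",
                  "text_pair": "They feared violence."}))
```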
|
bigmorning/whisper_syl_noforce_add_inpde__0015
|
bigmorning
| 2023-09-03T02:59:03Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_syl_noforce__0060",
"base_model:finetune:bigmorning/whisper_syl_noforce__0060",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T02:58:55Z |
---
license: apache-2.0
base_model: bigmorning/whisper_syl_noforce__0060
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_noforce_add_inpde__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_noforce_add_inpde__0015
This model is a fine-tuned version of [bigmorning/whisper_syl_noforce__0060](https://huggingface.co/bigmorning/whisper_syl_noforce__0060) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4618
- Train Accuracy: 0.0319
- Train Wermet: 0.1102
- Validation Loss: 1.0659
- Validation Accuracy: 0.0212
- Validation Wermet: 0.2974
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.0144 | 0.0185 | 0.9684 | 1.4362 | 0.0191 | 0.3870 | 0 |
| 1.6269 | 0.0241 | 0.2797 | 1.2846 | 0.0197 | 0.3593 | 1 |
| 1.3645 | 0.0256 | 0.2469 | 1.1967 | 0.0201 | 0.3481 | 2 |
| 1.2336 | 0.0263 | 0.2264 | 1.1602 | 0.0204 | 0.3390 | 3 |
| 1.0973 | 0.0272 | 0.2091 | 1.1211 | 0.0206 | 0.3296 | 4 |
| 0.9914 | 0.0279 | 0.1941 | 1.1412 | 0.0204 | 0.3209 | 5 |
| 0.9050 | 0.0284 | 0.1819 | 1.1795 | 0.0204 | 0.3281 | 6 |
| 0.8192 | 0.0291 | 0.1695 | 1.0845 | 0.0209 | 0.3149 | 7 |
| 0.7806 | 0.0293 | 0.1608 | 1.0628 | 0.0210 | 0.3099 | 8 |
| 0.7143 | 0.0298 | 0.1511 | 1.0554 | 0.0211 | 0.3069 | 9 |
| 0.6672 | 0.0302 | 0.1431 | 1.0539 | 0.0211 | 0.3046 | 10 |
| 0.6228 | 0.0305 | 0.1338 | 1.0531 | 0.0211 | 0.3038 | 11 |
| 0.5558 | 0.0311 | 0.1253 | 1.0476 | 0.0212 | 0.2997 | 12 |
| 0.5273 | 0.0314 | 0.1186 | 1.0431 | 0.0212 | 0.2991 | 13 |
| 0.4618 | 0.0319 | 0.1102 | 1.0659 | 0.0212 | 0.2974 | 14 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
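### How to use
A rough transcription sketch for this TensorFlow checkpoint. It assumes the Whisper processor files were pushed alongside the weights (if not, load the processor from the matching `openai/whisper-*` checkpoint); the silent dummy audio stands in for real 16 kHz speech.
```python
import numpy as np
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

repo = "bigmorning/whisper_syl_noforce_add_inpde__0015"
processor = WhisperProcessor.from_pretrained(repo)  # assumption: processor files are in the repo
model = TFWhisperForConditionalGeneration.from_pretrained(repo)

audio = np.zeros(16000, dtype=np.float32)  # 1 second of silence as a stand-in for real speech
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```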
|
bigmorning/whisper_syl_noforce_add_inpde__0005
|
bigmorning
| 2023-09-03T02:32:31Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:bigmorning/whisper_syl_noforce__0060",
"base_model:finetune:bigmorning/whisper_syl_noforce__0060",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-03T02:32:25Z |
---
license: apache-2.0
base_model: bigmorning/whisper_syl_noforce__0060
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_noforce_add_inpde__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_noforce_add_inpde__0005
This model is a fine-tuned version of [bigmorning/whisper_syl_noforce__0060](https://huggingface.co/bigmorning/whisper_syl_noforce__0060) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0973
- Train Accuracy: 0.0272
- Train Wermet: 0.2091
- Validation Loss: 1.1211
- Validation Accuracy: 0.0206
- Validation Wermet: 0.3296
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 3.0144 | 0.0185 | 0.9684 | 1.4362 | 0.0191 | 0.3870 | 0 |
| 1.6269 | 0.0241 | 0.2797 | 1.2846 | 0.0197 | 0.3593 | 1 |
| 1.3645 | 0.0256 | 0.2469 | 1.1967 | 0.0201 | 0.3481 | 2 |
| 1.2336 | 0.0263 | 0.2264 | 1.1602 | 0.0204 | 0.3390 | 3 |
| 1.0973 | 0.0272 | 0.2091 | 1.1211 | 0.0206 | 0.3296 | 4 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
crumb/Ducky-MoMoe-prototype-e4-causal
|
crumb
| 2023-09-03T02:05:38Z | 145 | 4 |
transformers
|
[
"transformers",
"pytorch",
"switchgpt2",
"text-generation",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-17T23:42:05Z |
give me access to a dgx or any >=8x{A100 | H100} so i can warm start from llama-70b and create a gpt-4 competitor please
https://twitter.com/aicrumb/status/1692965412676206778
|
The-matt/autumn-shadow-48_590
|
The-matt
| 2023-09-03T01:58:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T01:58:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
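### How to use
For reference, a minimal sketch of how an equivalent quantized base model can be constructed and this adapter applied. The base model is not recorded in this card, so `"base-model-name"` below is a placeholder you must replace with the checkpoint the adapter was trained on.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirror of the quantization settings listed above: 8-bit loading with llm_int8_threshold=6.0.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-name",  # placeholder: the adapter's base checkpoint is not documented here
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "The-matt/autumn-shadow-48_590")
```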
|
The-matt/autumn-shadow-48_570
|
The-matt
| 2023-09-03T01:19:09Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T01:19:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
IT20255756/deformable-detr-box-finetuned-weed-detection
|
IT20255756
| 2023-09-03T01:03:16Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deformable_detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/deformable-detr-box-supervised",
"base_model:finetune:facebook/deformable-detr-box-supervised",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-09-02T10:05:23Z |
---
license: apache-2.0
base_model: facebook/deformable-detr-box-supervised
tags:
- generated_from_trainer
model-index:
- name: deformable-detr-box-finetuned-weed-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deformable-detr-box-finetuned-weed-detection
This model is a fine-tuned version of [facebook/deformable-detr-box-supervised](https://huggingface.co/facebook/deformable-detr-box-supervised) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 1.13.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
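### How to use
A rough inference sketch using the object-detection pipeline, assuming the image processor was pushed with the weights; `"field.jpg"` is a placeholder path to a crop-field image, and the class names come from the (undocumented) fine-tuning dataset.
```python
from transformers import pipeline

detector = pipeline("object-detection", model="IT20255756/deformable-detr-box-finetuned-weed-detection")
for detection in detector("field.jpg"):  # placeholder image path or URL
    print(detection["label"], round(detection["score"], 3), detection["box"])
```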
|
adyprat/q-FrozenLake-v1-4x4-noSlippery
|
adyprat
| 2023-09-03T00:46:33Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-03T00:46:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="adyprat/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
The-matt/autumn-shadow-48_540
|
The-matt
| 2023-09-03T00:34:58Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T00:34:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
gmshuler95/Reinforce-CartPole-v1
|
gmshuler95
| 2023-09-03T00:34:54Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T22:56:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 474.92 +/- 43.21
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fahmiaziz/finetune-donut-cord-v1
|
fahmiaziz
| 2023-09-02T23:55:07Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-09-02T22:03:03Z |
---
license: creativeml-openrail-m
---
|
venetis/electra-base-discriminator-finetuned-3d-sentiment
|
venetis
| 2023-09-02T23:51:46Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-01T03:42:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: electra-base-discriminator-finetuned-3d-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-3d-sentiment
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5887
- Accuracy: 0.7873
- Precision: 0.7897
- Recall: 0.7873
- F1: 0.7864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6381
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.797 | 1.0 | 1595 | 0.7075 | 0.7353 | 0.7434 | 0.7353 | 0.7357 |
| 0.5329 | 2.0 | 3190 | 0.6508 | 0.7550 | 0.7646 | 0.7550 | 0.7554 |
| 0.4597 | 3.0 | 4785 | 0.5889 | 0.7702 | 0.7803 | 0.7702 | 0.7695 |
| 0.3918 | 4.0 | 6380 | 0.5887 | 0.7873 | 0.7897 | 0.7873 | 0.7864 |
| 0.3093 | 5.0 | 7975 | 0.6412 | 0.7833 | 0.7877 | 0.7833 | 0.7836 |
| 0.2144 | 6.0 | 9570 | 0.7786 | 0.7844 | 0.7900 | 0.7844 | 0.7851 |
| 0.1507 | 7.0 | 11165 | 0.8455 | 0.7853 | 0.7903 | 0.7853 | 0.7862 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
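### How to use
A minimal inference sketch; the dataset and the meaning of the three sentiment labels are not documented here, so the example sentence and the returned label ids are illustrative only.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="venetis/electra-base-discriminator-finetuned-3d-sentiment")
print(classifier("The new headset feels comfortable and the tracking is impressively smooth."))
```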
|
The-matt/autumn-shadow-48_530
|
The-matt
| 2023-09-02T23:48:24Z | 6 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T23:48:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
johaanm/test-planner-alpha-V6.1
|
johaanm
| 2023-09-02T23:47:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T23:47:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa
|
ayameRushia
| 2023-09-02T23:40:48Z | 391 | 4 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"id",
"dataset:indonlp/indonlu",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
widget:
- text: Entah mengapa saya merasakan ada sesuatu yang janggal di produk ini
tags:
- generated_from_trainer
datasets:
- indonlp/indonlu
metrics:
- accuracy
model-index:
- name: roberta-base-indonesian-1.5G-sentiment-analysis-smsa
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.9261904761904762
language:
- id
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-indonesian-1.5G-sentiment-analysis-smsa
This model is a fine-tuned version of [cahya/roberta-base-indonesian-1.5G](https://huggingface.co/cahya/roberta-base-indonesian-1.5G) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4294
- Accuracy: 0.9262
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6461 | 1.0 | 688 | 0.2620 | 0.9087 |
| 0.2627 | 2.0 | 1376 | 0.2291 | 0.9151 |
| 0.1784 | 3.0 | 2064 | 0.2891 | 0.9167 |
| 0.1099 | 4.0 | 2752 | 0.3317 | 0.9230 |
| 0.0857 | 5.0 | 3440 | 0.4294 | 0.9262 |
| 0.0346 | 6.0 | 4128 | 0.4759 | 0.9246 |
| 0.0221 | 7.0 | 4816 | 0.4946 | 0.9206 |
| 0.006 | 8.0 | 5504 | 0.5823 | 0.9175 |
| 0.0047 | 9.0 | 6192 | 0.5777 | 0.9159 |
| 0.004 | 10.0 | 6880 | 0.5800 | 0.9175 |
### How to use this model in Transformers Library
```python
from transformers import pipeline
pipe = pipeline(
"text-classification",
model="ayameRushia/roberta-base-indonesian-1.5G-sentiment-analysis-smsa"
)
pipe("Terima kasih atas bantuannya ya!")
```
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dt-and-vanilla-ardt/dt-d4rl_medium_halfcheetah-0209_2300-99
|
dt-and-vanilla-ardt
| 2023-09-02T23:36:43Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T23:01:50Z |
---
base_model: ''
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: dt-d4rl_medium_halfcheetah-0209_2300-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-d4rl_medium_halfcheetah-0209_2300-99
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
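### How to use
A loading-and-forward sketch for this Decision Transformer checkpoint. It only runs a dummy context through the model (zeros instead of real HalfCheetah observations); dimensions are read from the saved config rather than hard-coded.
```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained("dt-and-vanilla-ardt/dt-d4rl_medium_halfcheetah-0209_2300-99")
model.eval()

# Dummy rollout context of 20 timesteps for a single trajectory.
batch, seq = 1, 20
states = torch.zeros(batch, seq, model.config.state_dim)
actions = torch.zeros(batch, seq, model.config.act_dim)
returns_to_go = torch.zeros(batch, seq, 1)
timesteps = torch.arange(seq).unsqueeze(0)
attention_mask = torch.ones(batch, seq, dtype=torch.long)

with torch.no_grad():
    out = model(states=states, actions=actions, returns_to_go=returns_to_go,
                timesteps=timesteps, attention_mask=attention_mask)
print(out.action_preds.shape)  # predicted action at each step of the context
```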
|
acdg1214/Unit4-PixelCopter-v1
|
acdg1214
| 2023-09-02T23:33:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T23:32:59Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit4-PixelCopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 54.50 +/- 40.06
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nahuel89p/nous-hermes-llama2-13b.gguf.q4_K_M
|
nahuel89p
| 2023-09-02T23:22:40Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2023-09-02T22:10:52Z |
---
license: mit
---
This model is a direct conversion from https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML using the Llama.cpp `convert-llama-ggmlv3-to-gguf.py` utility script.
All the required metadata (config.json and tokenizer) was provided.
|
The-matt/autumn-shadow-48_520
|
The-matt
| 2023-09-02T23:18:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T23:18:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
sashat/whisper-sara-ar
|
sashat
| 2023-09-02T23:15:28Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:ClArTTS_N_QASR_female",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-02T21:59:41Z |
---
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- ClArTTS_N_QASR_female
model-index:
- name: Whisper Small Ar - Sara
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ar - Sara
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the CLArQasr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.2
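### How to use
A minimal transcription sketch using the standard ASR pipeline (the checkpoint is a regular PyTorch Whisper model); `"sample.wav"` is a placeholder path to an Arabic audio file.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sashat/whisper-sara-ar")
print(asr("sample.wav")["text"])  # placeholder audio path
```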
|
The-matt/autumn-shadow-48_510
|
The-matt
| 2023-09-02T23:06:03Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T23:05:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
CzarnyRycerz/ppo-LunarLander-v2-trained-locally
|
CzarnyRycerz
| 2023-09-02T22:55:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T22:38:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 310.89 +/- 13.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the zipped SB3 checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub("CzarnyRycerz/ppo-LunarLander-v2-trained-locally", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
The-matt/autumn-shadow-48_500
|
The-matt
| 2023-09-02T22:55:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T22:55:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
gmshuler95/q-Taxi-v3
|
gmshuler95
| 2023-09-02T22:45:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-02T22:45:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
model = load_from_hub(repo_id="gmshuler95/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
dt-and-vanilla-ardt/dt-d4rl_medium_walker2d-0209_2209-66
|
dt-and-vanilla-ardt
| 2023-09-02T22:45:26Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T22:11:15Z |
---
base_model: ''
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: dt-d4rl_medium_walker2d-0209_2209-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-d4rl_medium_walker2d-0209_2209-66
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_4_50000_6_e3_s6789_v4_l4_v100
|
KingKazma
| 2023-09-02T22:19:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T22:01:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
dt-and-vanilla-ardt/dt-d4rl_medium_hopper-0209_2150-66
|
dt-and-vanilla-ardt
| 2023-09-02T22:10:03Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T21:51:10Z |
---
base_model: ''
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: dt-d4rl_medium_hopper-0209_2150-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-d4rl_medium_hopper-0209_2150-66
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/dt-d4rl_medium_walker2d-0209_2131-33
|
dt-and-vanilla-ardt
| 2023-09-02T22:09:48Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"dataset:decision_transformer_gym_replay",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T21:32:19Z |
---
base_model: ''
tags:
- generated_from_trainer
datasets:
- decision_transformer_gym_replay
model-index:
- name: dt-d4rl_medium_walker2d-0209_2131-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dt-d4rl_medium_walker2d-0209_2131-33
This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3
|
Jonathancasjar/Retail_Shelves
|
Jonathancasjar
| 2023-09-02T22:05:00Z | 3 | 5 |
transformers
|
[
"transformers",
"bestv2.pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-26T23:56:01Z |
---
license: apache-2.0
---
<div style="text-align:center;">
<img style="margin: 0 auto;" width="700" src="https://huggingface.co/Jonathancasjar/Retail_Shelves/resolve/main/test_images/image.png"/>
</div>
- Install yolov5:
```bash
pip install yolov5==7.0.5
```
- Download a test image:
```bash
wget -O 'image.jpg' 'https://images.unsplash.com/photo-1556767576-cf0a4a80e5b8?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxzZWFyY2h8NXx8c3VwZXJtYXJrZXQlMjBzaGVsdmVzfGVufDB8fDB8fHww&w=1000&q=80'
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('Jonathancasjar/Retail_Shelves')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
# set an image
img = '/content/image.jpg'
# perform inference
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
|
The-matt/autumn-shadow-48_470
|
The-matt
| 2023-09-02T22:04:07Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T22:04:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
actionpace/limarp-13b-merged
|
actionpace
| 2023-09-02T21:55:51Z | 5 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-01T18:43:20Z |
---
license: other
language:
- en
---
Some of my own quants:
* limarp-13b-merged_Q5_1.gguf
* limarp-13b-merged_Q5_1_4K.gguf
* limarp-13b-merged_Q5_1_8K.gguf
Original Model: [limarp-13b-merged](https://huggingface.co/Oniichat/limarp-13b-merged)
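A rough loading sketch with `llama-cpp-python`, assuming you have downloaded one of the GGUF files listed above (e.g. with `huggingface_hub.hf_hub_download`); the prompt is illustrative.
```python
from llama_cpp import Llama

# Path points at the locally downloaded 4K-context Q5_1 quant listed above.
llm = Llama(model_path="limarp-13b-merged_Q5_1_4K.gguf", n_ctx=4096)
print(llm("Write a short greeting.", max_tokens=64)["choices"][0]["text"])
```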
|
monsoon-nlp/mGPT-13B-quantized
|
monsoon-nlp
| 2023-09-02T21:47:28Z | 16 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"multilingual",
"ar",
"hi",
"id",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-09-01T06:04:43Z |
---
license: apache-2.0
language:
- ar
- hi
- id
pipeline_tag: text-generation
tags:
- multilingual
widget:
- text: 'في مدرستي السابقة'
example_title: Arabic prompt
- text: 'आप समुद्री लुटेरों के बारे में क्या जानते हैं?'
example_title: Hindi prompt
- text: 'Kucing saya suka'
example_title: Indonesian prompt
---
# mGPT-quantized
The concept: 8-bit quantized version of [mGPT-13B](https://huggingface.co/ai-forever/mGPT-13B), an LLM released by AI-Forever / Sberbank AI in 2022-2023.
On the GPT scale, its parameter count sits between GPT-2 and GPT-3, but comparison is tricky after training on 60+ languages.
My goal is to evaluate this on Hindi and Indonesian tasks, where there are fewer autoregressive language models in this size range.
For English: use a GPT model or LLaMa2-7B
For Arabic: in August 2023 I would recommend the bilingual [JAIS model](https://huggingface.co/inception-mbzuai/jais-13b), which is also 13B parameters and can likewise be quantized.
In August 2023 AI-Forever added 1.3B-param models for 20+ languages. If your language is Mongolian, for example, it might be better to use mGPT-1.3B-mongol and not this one.
They also have a 1.3B param model for all languages, which I further quantized here: https://huggingface.co/monsoon-nlp/mGPT-quantized
## How was the model created?
Quantization of mGPT-13B was done using the `bitsandbytes` library, Colab Pro with an A100 GPU, and a lot of space on Google Drive.
```python
import torch
from transformers import BitsAndBytesConfig, GPT2LMHeadModel
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
bnb_8bit_compute_dtype=torch.bfloat16,
bnb_8bit_use_double_quant=True,
bnb_8bit_quant_type="nf4",
)
qmodel = GPT2LMHeadModel.from_pretrained(
"ai-forever/mGPT-13B",
load_in_8bit=True,
torch_dtype=torch.bfloat16,
quantization_config=quantization_config,
device_map="auto"
)
qmodel.save_pretrained("model_name")
```
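Once quantized (or when loading this repo directly), generation works like any other causal LM. A rough sketch continuing from the block above; taking the tokenizer from the original `ai-forever/mGPT-13B` repo is an assumption about compatibility rather than something documented here.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai-forever/mGPT-13B")
inputs = tokenizer("आप समुद्री लुटेरों के बारे में क्या जानते हैं?", return_tensors="pt").to(qmodel.device)
outputs = qmodel.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```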
## Future steps
- mGPT could be further quantized (4-bit), but `model.save_pretrained()` currently throws a `NotImplementedError` error.
|
actionpace/UndiMix-v2-13b
|
actionpace
| 2023-09-02T21:31:34Z | 1 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-02T21:12:54Z |
---
license: other
language:
- en
---
Some of my own quants:
* UndiMix-v2-13b_Q5_1_4K.gguf
* UndiMix-v2-13b_Q5_1_8K.gguf
Original Model: [UndiMix-v2-13b](https://huggingface.co/Undi95/UndiMix-v2-13b)
|
KingKazma/xsum_t5-small_p_tuning_500_3_50000_8_e3_s6789_v4_l4_v100
|
KingKazma
| 2023-09-02T21:20:15Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T21:20:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
The-matt/autumn-shadow-48_430
|
The-matt
| 2023-09-02T21:11:20Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-02T21:11:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
ZukoVZA/Morfonica
|
ZukoVZA
| 2023-09-02T20:57:47Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-04-23T22:07:39Z |
---
license: openrail
---
- liuwei : Rui
- qinshen : Nanami
- touzi : Touko
- zenbai : Mashiro
- zuzhi : Futaba
|