modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-12 06:31:37
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 555
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-12 06:31:07
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
weslleylima/my-awesome-setfit-model
|
weslleylima
| 2023-06-28T13:27:42Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-28T12:33:07Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# weslleylima/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("weslleylima/my-awesome-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sleepynlp/q-Taxi-v3-v1-leo
|
sleepynlp
| 2023-06-28T13:26:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T13:26:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1-leo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sleepynlp/q-Taxi-v3-v1-leo", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sleepynlp/q-FrozenLake-v1-4x4-noSlippery-leo
|
sleepynlp
| 2023-06-28T13:20:09Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T13:20:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-leo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="sleepynlp/q-FrozenLake-v1-4x4-noSlippery-leo", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Shrishml/dolly_lora3b
|
Shrishml
| 2023-06-28T13:09:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T07:01:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
hassansoliman/falcon-40b-qlora-utterance-adaptations_v6
|
hassansoliman
| 2023-06-28T12:55:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T12:07:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
catrabbitbear/pyramids2
|
catrabbitbear
| 2023-06-28T12:51:21Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:51:20Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: catrabbitbear/pyramids2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jensvw/q-FrozenLake-v1-4x4-noSlippery
|
jensvw
| 2023-06-28T12:38:03Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T12:37:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="jensvw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Michailo1324/TI-mobilenetv3-imagenet-v2
|
Michailo1324
| 2023-06-28T12:28:36Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-15T10:46:21Z |
---
pipeline_tag: image-classification
---
|
Deepamparmar/Bloom3B-LORA
|
Deepamparmar
| 2023-06-28T12:25:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T12:25:41Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Raizel123/Vbabylora
|
Raizel123
| 2023-06-28T12:25:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T12:22:00Z |
---
license: creativeml-openrail-m
---
|
QuangHuy54/long-t5-tglobal-base-google-multimedia
|
QuangHuy54
| 2023-06-28T12:09:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T04:49:36Z |
---
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: long-t5-tglobal-base-google-multimedia
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
config: default
split: train[15000:20000]
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-base-google-multimedia
This model is a fine-tuned version of [QuangHuy54/long-t5-tglobal-base-google-multimedia](https://huggingface.co/QuangHuy54/long-t5-tglobal-base-google-multimedia) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9936
- Rouge1: 0.1004
- Rouge2: 0.0347
- Rougel: 0.078
- Rougelsum: 0.078
- Gen Len: 18.995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.383 | 1.0 | 3000 | 1.9936 | 0.1004 | 0.0347 | 0.078 | 0.078 | 18.995 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gurjotsaggu/Gurjot
|
gurjotsaggu
| 2023-06-28T12:09:05Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-28T12:09:05Z |
---
license: bigcode-openrail-m
---
|
Narsil/amall-7b
|
Narsil
| 2023-06-28T12:05:23Z | 30 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T08:11:43Z |
---
license: other
duplicated_from: huggingface/llama-7b
---
|
mahsa-gh/my_awesome_model
|
mahsa-gh
| 2023-06-28T11:55:49Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T04:12:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mahsa-gh/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mahsa-gh/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0625
- Validation Loss: 0.2113
- Train Accuracy: 0.9318
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2532 | 0.1893 | 0.9288 | 0 |
| 0.1336 | 0.1840 | 0.9321 | 1 |
| 0.0625 | 0.2113 | 0.9318 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
czz23/SplitStatement-setfit-model
|
czz23
| 2023-06-28T11:47:11Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"albert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-28T11:47:08Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# czz23/SplitStatement-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("czz23/SplitStatement-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
BerserkerMother/Reinforce-Pixelcopter-PLE-v0
|
BerserkerMother
| 2023-06-28T11:45:36Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T11:45:35Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.90 +/- 8.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YakovElm/Hyperledger_15_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T11:44:13Z | 58 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T11:43:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_15_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_15_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0278
- Train Accuracy: 0.9920
- Validation Loss: 0.8934
- Validation Accuracy: 0.8278
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4883 | 0.7456 | 0.5933 | 0.7749 | 0 |
| 0.1029 | 0.9652 | 0.7254 | 0.8683 | 1 |
| 0.0278 | 0.9920 | 0.8934 | 0.8278 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Qt_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T11:35:53Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T11:35:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0002
- Train Accuracy: 1.0
- Validation Loss: 0.4444
- Validation Accuracy: 0.9505
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0261 | 0.9997 | 0.3635 | 0.9505 | 0 |
| 0.0005 | 1.0 | 0.4156 | 0.9505 | 1 |
| 0.0002 | 1.0 | 0.4444 | 0.9505 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hztang/t5-small-base-custom
|
hztang
| 2023-06-28T11:18:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T09:29:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-base-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-base-custom
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 238 | 1.0463 | 25.3323 | 19.1916 | 23.8957 | 24.792 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V8
|
NickyNicky
| 2023-06-28T11:09:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T11:09:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V6
|
NickyNicky
| 2023-06-28T10:35:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T10:35:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
mnavas/bert-finetuned-token-reqsolvgencat
|
mnavas
| 2023-06-28T10:34:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-28T09:43:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-token-reqsolvgencat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-token-reqsolvgencat
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0573
- Precision: 0.1111
- Recall: 0.5
- F1: 0.1818
- Accuracy: 0.7765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8551 | 1.0 | 3 | 1.1478 | 0.0 | 0.0 | 0.0 | 0.4696 |
| 1.4354 | 2.0 | 6 | 0.7403 | 0.0 | 0.0 | 0.0 | 0.6088 |
| 0.6532 | 3.0 | 9 | 0.6900 | 0.0 | 0.0 | 0.0 | 0.6010 |
| 0.406 | 4.0 | 12 | 2.0318 | 0.0 | 0.0 | 0.0 | 0.5284 |
| 1.3514 | 5.0 | 15 | 1.3364 | 0.0 | 0.0 | 0.0 | 0.6186 |
| 0.419 | 6.0 | 18 | 0.9463 | 0.0 | 0.0 | 0.0 | 0.6 |
| 0.4816 | 7.0 | 21 | 0.9409 | 0.0 | 0.0 | 0.0 | 0.6314 |
| 0.8621 | 8.0 | 24 | 0.9034 | 0.0 | 0.0 | 0.0 | 0.7010 |
| 0.2981 | 9.0 | 27 | 0.9617 | 0.0 | 0.0 | 0.0 | 0.6039 |
| 0.5882 | 10.0 | 30 | 0.5289 | 0.0 | 0.0 | 0.0 | 0.8 |
| 0.4327 | 11.0 | 33 | 0.6573 | 0.2 | 0.5 | 0.2857 | 0.7882 |
| 0.2076 | 12.0 | 36 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.6784 |
| 0.1454 | 13.0 | 39 | 0.7729 | 0.0 | 0.0 | 0.0 | 0.7882 |
| 0.2339 | 14.0 | 42 | 0.9067 | 0.0 | 0.0 | 0.0 | 0.7608 |
| 0.2281 | 15.0 | 45 | 0.4732 | 0.0 | 0.0 | 0.0 | 0.7892 |
| 0.1058 | 16.0 | 48 | 0.8219 | 0.0667 | 0.5 | 0.1176 | 0.7961 |
| 0.0577 | 17.0 | 51 | 0.9698 | 0.1429 | 0.5 | 0.2222 | 0.8147 |
| 0.0354 | 18.0 | 54 | 1.3158 | 0.0714 | 0.5 | 0.125 | 0.7549 |
| 0.0258 | 19.0 | 57 | 2.2414 | 0.1667 | 0.5 | 0.25 | 0.6941 |
| 0.0456 | 20.0 | 60 | 1.2596 | 0.0667 | 0.5 | 0.1176 | 0.7520 |
| 0.0339 | 21.0 | 63 | 1.8941 | 0.1667 | 0.5 | 0.25 | 0.7069 |
| 0.02 | 22.0 | 66 | 1.0201 | 0.1 | 0.5 | 0.1667 | 0.7941 |
| 0.0257 | 23.0 | 69 | 1.4716 | 0.0909 | 0.5 | 0.1538 | 0.7716 |
| 0.0148 | 24.0 | 72 | 2.6551 | 0.0 | 0.0 | 0.0 | 0.6853 |
| 0.0257 | 25.0 | 75 | 1.7904 | 0.125 | 0.5 | 0.2 | 0.7324 |
| 0.0183 | 26.0 | 78 | 2.2579 | 0.2 | 0.5 | 0.2857 | 0.7078 |
| 0.0218 | 27.0 | 81 | 1.3332 | 0.1667 | 0.5 | 0.25 | 0.7137 |
| 0.0148 | 28.0 | 84 | 0.7506 | 0.0769 | 0.5 | 0.1333 | 0.8696 |
| 0.0081 | 29.0 | 87 | 2.2165 | 0.1667 | 0.5 | 0.25 | 0.7059 |
| 0.0098 | 30.0 | 90 | 2.5291 | 0.2 | 0.5 | 0.2857 | 0.7 |
| 0.0072 | 31.0 | 93 | 2.5877 | 0.25 | 0.5 | 0.3333 | 0.6990 |
| 0.0162 | 32.0 | 96 | 2.7463 | 0.1667 | 0.5 | 0.25 | 0.6941 |
| 0.0073 | 33.0 | 99 | 2.2100 | 0.1429 | 0.5 | 0.2222 | 0.7176 |
| 0.0057 | 34.0 | 102 | 1.5509 | 0.0769 | 0.5 | 0.1333 | 0.7814 |
| 0.0077 | 35.0 | 105 | 1.8449 | 0.1111 | 0.5 | 0.1818 | 0.7578 |
| 0.0071 | 36.0 | 108 | 2.3808 | 0.3333 | 0.5 | 0.4 | 0.7127 |
| 0.0063 | 37.0 | 111 | 1.1188 | 0.125 | 0.5 | 0.2 | 0.8598 |
| 0.0067 | 38.0 | 114 | 1.7108 | 0.0 | 0.0 | 0.0 | 0.7892 |
| 0.0063 | 39.0 | 117 | 1.8995 | 0.1 | 0.5 | 0.1667 | 0.7363 |
| 0.0086 | 40.0 | 120 | 2.0472 | 0.1667 | 0.5 | 0.25 | 0.7137 |
| 0.0028 | 41.0 | 123 | 1.8879 | 0.1111 | 0.5 | 0.1818 | 0.7245 |
| 0.0084 | 42.0 | 126 | 1.9545 | 0.1667 | 0.5 | 0.25 | 0.7157 |
| 0.0018 | 43.0 | 129 | 2.5274 | 0.0 | 0.0 | 0.0 | 0.6922 |
| 0.0031 | 44.0 | 132 | 2.0330 | 0.0909 | 0.5 | 0.1538 | 0.7245 |
| 0.0034 | 45.0 | 135 | 0.9381 | 0.1 | 0.5 | 0.1667 | 0.8569 |
| 0.0016 | 46.0 | 138 | 0.6307 | 0.1111 | 0.5 | 0.1818 | 0.8980 |
| 0.0011 | 47.0 | 141 | 0.7823 | 0.0667 | 0.5 | 0.1176 | 0.8676 |
| 0.0021 | 48.0 | 144 | 1.1034 | 0.0909 | 0.5 | 0.1538 | 0.8373 |
| 0.0025 | 49.0 | 147 | 1.4482 | 0.2 | 0.5 | 0.2857 | 0.8108 |
| 0.0023 | 50.0 | 150 | 1.5541 | 0.1667 | 0.5 | 0.25 | 0.8069 |
| 0.0007 | 51.0 | 153 | 1.7164 | 0.1667 | 0.5 | 0.25 | 0.8010 |
| 0.0002 | 52.0 | 156 | 1.8291 | 0.1667 | 0.5 | 0.25 | 0.7941 |
| 0.0002 | 53.0 | 159 | 1.8918 | 0.1429 | 0.5 | 0.2222 | 0.7892 |
| 0.0016 | 54.0 | 162 | 1.8814 | 0.2 | 0.5 | 0.2857 | 0.7961 |
| 0.0013 | 55.0 | 165 | 1.8920 | 0.1 | 0.5 | 0.1667 | 0.7873 |
| 0.0009 | 56.0 | 168 | 1.8802 | 0.1 | 0.5 | 0.1667 | 0.7843 |
| 0.0017 | 57.0 | 171 | 1.7839 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0028 | 58.0 | 174 | 1.7409 | 0.1 | 0.5 | 0.1667 | 0.7961 |
| 0.0029 | 59.0 | 177 | 2.1381 | 0.1 | 0.5 | 0.1667 | 0.7196 |
| 0.0008 | 60.0 | 180 | 2.4554 | 0.125 | 0.5 | 0.2 | 0.6922 |
| 0.0012 | 61.0 | 183 | 2.7071 | 0.125 | 0.5 | 0.2 | 0.6676 |
| 0.0011 | 62.0 | 186 | 2.5624 | 0.125 | 0.5 | 0.2 | 0.6814 |
| 0.0009 | 63.0 | 189 | 2.1029 | 0.2 | 0.5 | 0.2857 | 0.7363 |
| 0.0011 | 64.0 | 192 | 1.5984 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0005 | 65.0 | 195 | 1.4155 | 0.0909 | 0.5 | 0.1538 | 0.8088 |
| 0.001 | 66.0 | 198 | 1.3276 | 0.1111 | 0.5 | 0.1818 | 0.8225 |
| 0.0002 | 67.0 | 201 | 1.2844 | 0.1429 | 0.5 | 0.2222 | 0.8255 |
| 0.0076 | 68.0 | 204 | 1.2601 | 0.1667 | 0.5 | 0.25 | 0.8275 |
| 0.0003 | 69.0 | 207 | 1.3359 | 0.1429 | 0.5 | 0.2222 | 0.8343 |
| 0.0027 | 70.0 | 210 | 1.2452 | 0.25 | 0.5 | 0.3333 | 0.8363 |
| 0.004 | 71.0 | 213 | 1.7228 | 0.1 | 0.5 | 0.1667 | 0.7775 |
| 0.0004 | 72.0 | 216 | 2.7878 | 0.5 | 0.5 | 0.5 | 0.6804 |
| 1.1257 | 73.0 | 219 | 2.5785 | 0.5 | 0.5 | 0.5 | 0.6804 |
| 0.0209 | 74.0 | 222 | 0.1081 | 0.1111 | 0.5 | 0.1818 | 0.9775 |
| 0.0486 | 75.0 | 225 | 1.5039 | 0.0667 | 0.5 | 0.1176 | 0.7784 |
| 0.0012 | 76.0 | 228 | 3.2975 | 0.0 | 0.0 | 0.0 | 0.6098 |
| 0.0929 | 77.0 | 231 | 2.4796 | 0.3333 | 0.5 | 0.4 | 0.6618 |
| 0.0006 | 78.0 | 234 | 1.6117 | 0.125 | 0.5 | 0.2 | 0.7951 |
| 0.0077 | 79.0 | 237 | 1.8651 | 0.125 | 0.5 | 0.2 | 0.7127 |
| 0.0032 | 80.0 | 240 | 2.0187 | 0.1667 | 0.5 | 0.25 | 0.6873 |
| 0.0007 | 81.0 | 243 | 1.9540 | 0.1 | 0.5 | 0.1667 | 0.6902 |
| 0.0011 | 82.0 | 246 | 1.8161 | 0.0714 | 0.5 | 0.125 | 0.7275 |
| 0.0009 | 83.0 | 249 | 1.7688 | 0.1111 | 0.5 | 0.1818 | 0.7657 |
| 0.0006 | 84.0 | 252 | 1.7025 | 0.1667 | 0.5 | 0.25 | 0.7912 |
| 0.0003 | 85.0 | 255 | 1.6924 | 0.2 | 0.5 | 0.2857 | 0.7980 |
| 0.0004 | 86.0 | 258 | 1.7019 | 0.1667 | 0.5 | 0.25 | 0.7980 |
| 0.0008 | 87.0 | 261 | 1.7386 | 0.2 | 0.5 | 0.2857 | 0.7971 |
| 0.0002 | 88.0 | 264 | 1.7882 | 0.1667 | 0.5 | 0.25 | 0.7961 |
| 0.0001 | 89.0 | 267 | 1.8405 | 0.0833 | 0.5 | 0.1429 | 0.7833 |
| 0.0004 | 90.0 | 270 | 1.8319 | 0.125 | 0.5 | 0.2 | 0.7931 |
| 0.0003 | 91.0 | 273 | 1.8520 | 0.125 | 0.5 | 0.2 | 0.7931 |
| 0.0001 | 92.0 | 276 | 1.8876 | 0.0909 | 0.5 | 0.1538 | 0.7873 |
| 0.0001 | 93.0 | 279 | 1.9220 | 0.1 | 0.5 | 0.1667 | 0.7814 |
| 0.0004 | 94.0 | 282 | 2.0870 | 0.1 | 0.5 | 0.1667 | 0.7333 |
| 0.0004 | 95.0 | 285 | 2.1469 | 0.1111 | 0.5 | 0.1818 | 0.7225 |
| 0.0009 | 96.0 | 288 | 2.1726 | 0.1429 | 0.5 | 0.2222 | 0.7196 |
| 0.0007 | 97.0 | 291 | 1.9247 | 0.1 | 0.5 | 0.1667 | 0.7667 |
| 0.0012 | 98.0 | 294 | 1.8172 | 0.1111 | 0.5 | 0.1818 | 0.7902 |
| 0.0002 | 99.0 | 297 | 1.7789 | 0.1 | 0.5 | 0.1667 | 0.7980 |
| 0.0109 | 100.0 | 300 | 1.8386 | 0.0909 | 0.5 | 0.1538 | 0.7843 |
| 0.0008 | 101.0 | 303 | 2.0800 | 0.1 | 0.5 | 0.1667 | 0.7314 |
| 0.0002 | 102.0 | 306 | 2.5253 | 0.3333 | 0.5 | 0.4 | 0.6804 |
| 0.001 | 103.0 | 309 | 2.2149 | 0.1 | 0.5 | 0.1667 | 0.7118 |
| 0.0001 | 104.0 | 312 | 1.8607 | 0.1 | 0.5 | 0.1667 | 0.8020 |
| 0.0002 | 105.0 | 315 | 1.7786 | 0.1 | 0.5 | 0.1667 | 0.8127 |
| 0.0009 | 106.0 | 318 | 1.8432 | 0.125 | 0.5 | 0.2 | 0.8078 |
| 0.0001 | 107.0 | 321 | 2.0339 | 0.1111 | 0.5 | 0.1818 | 0.7784 |
| 0.0001 | 108.0 | 324 | 2.2628 | 0.0769 | 0.5 | 0.1333 | 0.7392 |
| 0.0007 | 109.0 | 327 | 2.3649 | 0.0714 | 0.5 | 0.125 | 0.7265 |
| 0.0001 | 110.0 | 330 | 2.4330 | 0.0714 | 0.5 | 0.125 | 0.7137 |
| 0.0001 | 111.0 | 333 | 2.4918 | 0.1 | 0.5 | 0.1667 | 0.7059 |
| 0.0002 | 112.0 | 336 | 2.2853 | 0.0909 | 0.5 | 0.1538 | 0.7647 |
| 0.0001 | 113.0 | 339 | 2.1945 | 0.1 | 0.5 | 0.1667 | 0.7765 |
| 0.0001 | 114.0 | 342 | 2.1446 | 0.1111 | 0.5 | 0.1818 | 0.7824 |
| 0.0008 | 115.0 | 345 | 2.1037 | 0.1111 | 0.5 | 0.1818 | 0.7863 |
| 0.0001 | 116.0 | 348 | 2.0753 | 0.1 | 0.5 | 0.1667 | 0.7902 |
| 0.0001 | 117.0 | 351 | 2.0617 | 0.1 | 0.5 | 0.1667 | 0.7902 |
| 0.0014 | 118.0 | 354 | 1.9908 | 0.1111 | 0.5 | 0.1818 | 0.7961 |
| 0.0009 | 119.0 | 357 | 1.9134 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0009 | 120.0 | 360 | 1.8651 | 0.1111 | 0.5 | 0.1818 | 0.8020 |
| 0.0002 | 121.0 | 363 | 1.8480 | 0.1111 | 0.5 | 0.1818 | 0.8 |
| 0.0001 | 122.0 | 366 | 1.8558 | 0.0909 | 0.5 | 0.1538 | 0.7961 |
| 0.0001 | 123.0 | 369 | 1.8727 | 0.1111 | 0.5 | 0.1818 | 0.7892 |
| 0.0001 | 124.0 | 372 | 1.8951 | 0.125 | 0.5 | 0.2 | 0.7863 |
| 0.0007 | 125.0 | 375 | 1.8971 | 0.1667 | 0.5 | 0.25 | 0.7833 |
| 0.0011 | 126.0 | 378 | 1.8479 | 0.1667 | 0.5 | 0.25 | 0.7814 |
| 0.0002 | 127.0 | 381 | 1.7985 | 0.1429 | 0.5 | 0.2222 | 0.7804 |
| 0.0001 | 128.0 | 384 | 1.7712 | 0.125 | 0.5 | 0.2 | 0.7794 |
| 0.0001 | 129.0 | 387 | 1.7929 | 0.1111 | 0.5 | 0.1818 | 0.7755 |
| 0.0002 | 130.0 | 390 | 1.8551 | 0.1111 | 0.5 | 0.1818 | 0.7725 |
| 0.0001 | 131.0 | 393 | 1.9030 | 0.125 | 0.5 | 0.2 | 0.7716 |
| 0.0001 | 132.0 | 396 | 1.9421 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0009 | 133.0 | 399 | 1.9440 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0001 | 134.0 | 402 | 1.9455 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0001 | 135.0 | 405 | 1.9527 | 0.125 | 0.5 | 0.2 | 0.7686 |
| 0.0004 | 136.0 | 408 | 1.9493 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0001 | 137.0 | 411 | 1.9432 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0001 | 138.0 | 414 | 1.9491 | 0.125 | 0.5 | 0.2 | 0.7706 |
| 0.0002 | 139.0 | 417 | 1.9434 | 0.1 | 0.5 | 0.1667 | 0.7735 |
| 0.0006 | 140.0 | 420 | 1.9363 | 0.1111 | 0.5 | 0.1818 | 0.7784 |
| 0.0001 | 141.0 | 423 | 1.9330 | 0.1111 | 0.5 | 0.1818 | 0.7804 |
| 0.0001 | 142.0 | 426 | 1.9334 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 143.0 | 429 | 1.9366 | 0.125 | 0.5 | 0.2 | 0.7843 |
| 0.0001 | 144.0 | 432 | 1.9417 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 145.0 | 435 | 1.9467 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 146.0 | 438 | 1.9518 | 0.1111 | 0.5 | 0.1818 | 0.7843 |
| 0.0001 | 147.0 | 441 | 1.9574 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0001 | 148.0 | 444 | 1.9635 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0003 | 149.0 | 447 | 1.9643 | 0.1111 | 0.5 | 0.1818 | 0.7853 |
| 0.0001 | 150.0 | 450 | 1.9664 | 0.125 | 0.5 | 0.2 | 0.7853 |
| 0.0008 | 151.0 | 453 | 1.9830 | 0.125 | 0.5 | 0.2 | 0.7843 |
| 0.0001 | 152.0 | 456 | 1.9974 | 0.125 | 0.5 | 0.2 | 0.7824 |
| 0.0001 | 153.0 | 459 | 2.0083 | 0.125 | 0.5 | 0.2 | 0.7824 |
| 0.0007 | 154.0 | 462 | 2.0066 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 155.0 | 465 | 2.0047 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 156.0 | 468 | 2.0042 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0001 | 157.0 | 471 | 2.0075 | 0.1111 | 0.5 | 0.1818 | 0.7833 |
| 0.0008 | 158.0 | 474 | 2.0217 | 0.1111 | 0.5 | 0.1818 | 0.7824 |
| 0.0001 | 159.0 | 477 | 2.0326 | 0.125 | 0.5 | 0.2 | 0.7814 |
| 0.0001 | 160.0 | 480 | 2.0422 | 0.125 | 0.5 | 0.2 | 0.7814 |
| 0.0001 | 161.0 | 483 | 2.0493 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0001 | 162.0 | 486 | 2.0568 | 0.125 | 0.5 | 0.2 | 0.7804 |
| 0.0002 | 163.0 | 489 | 2.0818 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 164.0 | 492 | 2.0997 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 165.0 | 495 | 2.1127 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 166.0 | 498 | 2.1226 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 167.0 | 501 | 2.1311 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0001 | 168.0 | 504 | 2.1377 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0003 | 169.0 | 507 | 2.1296 | 0.1111 | 0.5 | 0.1818 | 0.7716 |
| 0.0001 | 170.0 | 510 | 2.1180 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 171.0 | 513 | 2.1095 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 172.0 | 516 | 2.1035 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 173.0 | 519 | 2.0999 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0 | 174.0 | 522 | 2.0984 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0004 | 175.0 | 525 | 2.0930 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0 | 176.0 | 528 | 2.0874 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.006 | 177.0 | 531 | 2.0807 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 178.0 | 534 | 2.0717 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 179.0 | 537 | 2.0721 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 180.0 | 540 | 2.0731 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.001 | 181.0 | 543 | 2.0712 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 182.0 | 546 | 2.0700 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 183.0 | 549 | 2.0692 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 184.0 | 552 | 2.0622 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0006 | 185.0 | 555 | 2.0582 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 186.0 | 558 | 2.0564 | 0.125 | 0.5 | 0.2 | 0.7755 |
| 0.0001 | 187.0 | 561 | 2.0562 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 188.0 | 564 | 2.0571 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 189.0 | 567 | 2.0587 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0001 | 190.0 | 570 | 2.0597 | 0.1111 | 0.5 | 0.1818 | 0.7735 |
| 0.0041 | 191.0 | 573 | 2.0555 | 0.125 | 0.5 | 0.2 | 0.7745 |
| 0.0002 | 192.0 | 576 | 2.0473 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 193.0 | 579 | 2.0480 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0002 | 194.0 | 582 | 2.0507 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 195.0 | 585 | 2.0541 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 196.0 | 588 | 2.0562 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 197.0 | 591 | 2.0586 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 198.0 | 594 | 2.0596 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0001 | 199.0 | 597 | 2.0604 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
| 0.0003 | 200.0 | 600 | 2.0573 | 0.1111 | 0.5 | 0.1818 | 0.7765 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Tri1/12-18-finetuned-eng-to-para
|
Tri1
| 2023-06-28T10:34:13Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T16:09:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: 12-18-finetuned-eng-to-para
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 12-18-finetuned-eng-to-para
This model is a fine-tuned version of [Tri1/6-12-finetuned-eng-to-para](https://huggingface.co/Tri1/6-12-finetuned-eng-to-para) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0339
- Bleu: 35.7233
- Gen Len: 22.56
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0441 | 1.0 | 6250 | 0.0420 | 31.1327 | 22.08 |
| 0.0412 | 2.0 | 12500 | 0.0397 | 32.419 | 22.72 |
| 0.0396 | 3.0 | 18750 | 0.0371 | 35.4708 | 20.96 |
| 0.0385 | 4.0 | 25000 | 0.0356 | 34.6037 | 21.12 |
| 0.0377 | 5.0 | 31250 | 0.0343 | 35.7663 | 22.24 |
| 0.0381 | 6.0 | 37500 | 0.0339 | 35.7233 | 22.56 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
NasimB/gpt2-dp-cl-rarity
|
NasimB
| 2023-06-28T10:27:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T08:00:14Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-cl-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-cl-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7749 | 0.27 | 500 | 5.8089 |
| 5.4247 | 0.53 | 1000 | 5.4313 |
| 5.0622 | 0.8 | 1500 | 5.2244 |
| 4.7874 | 1.07 | 2000 | 5.1033 |
| 4.5968 | 1.33 | 2500 | 5.0154 |
| 4.5038 | 1.6 | 3000 | 4.9414 |
| 4.4187 | 1.87 | 3500 | 4.8633 |
| 4.2538 | 2.13 | 4000 | 4.8506 |
| 4.1481 | 2.4 | 4500 | 4.8187 |
| 4.1203 | 2.67 | 5000 | 4.7835 |
| 4.0912 | 2.93 | 5500 | 4.7395 |
| 3.8956 | 3.2 | 6000 | 4.7446 |
| 3.8652 | 3.47 | 6500 | 4.7275 |
| 3.8507 | 3.74 | 7000 | 4.7142 |
| 3.8345 | 4.0 | 7500 | 4.6988 |
| 3.5879 | 4.27 | 8000 | 4.7267 |
| 3.5976 | 4.54 | 8500 | 4.7226 |
| 3.5887 | 4.8 | 9000 | 4.7138 |
| 3.5245 | 5.07 | 9500 | 4.7321 |
| 3.3551 | 5.34 | 10000 | 4.7423 |
| 3.3637 | 5.6 | 10500 | 4.7457 |
| 3.3497 | 5.87 | 11000 | 4.7483 |
| 3.2842 | 6.14 | 11500 | 4.7694 |
| 3.21 | 6.4 | 12000 | 4.7680 |
| 3.2133 | 6.67 | 12500 | 4.7721 |
| 3.2151 | 6.94 | 13000 | 4.7719 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
YakovElm/Qt_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T10:22:56Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T10:22:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Train Accuracy: 1.0
- Validation Loss: 0.5572
- Validation Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0351 | 0.9905 | 0.4427 | 0.9416 | 0 |
| 0.0003 | 1.0 | 0.5263 | 0.9416 | 1 |
| 0.0001 | 1.0 | 0.5572 | 0.9416 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KamilAin/bart-base-booksum
|
KamilAin
| 2023-06-28T10:19:50Z | 129 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"booksum",
"summary",
"summarization",
"book",
"en",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-23T13:46:59Z |
---
language: en
license: apache-2.0
tags:
- booksum
- summary
- summarization
- book
metrics:
- rouge
widget:
- text: "In the dead night, Frodo lay in a dream without light. Then he saw the young moon rising; under its thin light there loomed before him a black wall of rock, pierced by a dark arch like a great gate. It seemed to Frodo that he was lifted up, and passing over he saw that the rock-wall was a circle of hills, and that within it was a plain, and in the midst of the plain stood a pinnacle of stone, like a vast tower but not made by hands. On its top stood the figure of a man. The moon as it rose seemed to hang for a moment above his head and glistened in his white hair as the wind stirred it. Up from the dark plain below came the crying of fell voices, and the howling of many wolves. Suddenly a shadow, like the shape of great wings, passed across the moon. The figure lifted his arms and a light flashed from the staff that he wielded. A mighty eagle swept down and bore him away. The voices wailed and the wolves yammered. There was a noise like a strong wind blowing, and on it was borne the sound of hoofs, galloping, galloping, galloping from the East. ‘Black Riders!’ thought Frodo as he wakened, with the sound of the hoofs still echoing in his mind. He wondered if he would ever again have the courage to leave the safety of these stone walls. He lay motionless, still listening; but all was now silent, and at last he turned and fell asleep again or wandered into some other unremembered dream."
example_title: "book example"
datasets:
- kmfoda/booksum
---
# BART-base-Booksum
This is a BART-base model fine-tuned on a BookSum dataset
- **Use cases:** book summarization, general text summarization.
- This is a [`https://huggingface.co/facebook/bart-base`](https://huggingface.co/facebook/bart-base), fine-tuned for five epochs
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V5
|
NickyNicky
| 2023-06-28T10:19:47Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T10:19:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
jondurbin/airoboros-7b-gpt4-1.4
|
jondurbin
| 2023-06-28T10:16:41Z | 1,427 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T00:22:59Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.4
---
__mostly untested, use if you want, or wait for some validation__
## Overview
This is a __full__ (not qlora) fine-tune 7b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat)
The prompt it was trained with was:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jmjpickard/test-model
|
jmjpickard
| 2023-06-28T10:03:21Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-28T10:00:00Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# /var/folders/p2/9pl12g015n18blt1xbr70qww0000gn/T/tmp6j9vcf6u/jmjpickard/test-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/p2/9pl12g015n18blt1xbr70qww0000gn/T/tmp6j9vcf6u/jmjpickard/test-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
vkoriukina/ppo-LunarLander-v2
|
vkoriukina
| 2023-06-28T10:01:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T10:01:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.68 +/- 27.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dev-senolys/camembert_base_finetunned_one_thema_balanced_5_epochs
|
dev-senolys
| 2023-06-28T10:00:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T08:45:28Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: camembert_base_finetunned_one_thema_balanced_5_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_base_finetunned_one_thema_balanced_5_epochs
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 116 | 2.3340 |
| No log | 2.0 | 232 | 2.0439 |
| No log | 3.0 | 348 | 1.8438 |
| No log | 4.0 | 464 | 1.8124 |
| 1.9931 | 5.0 | 580 | 1.7778 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V4
|
NickyNicky
| 2023-06-28T09:58:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T09:58:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
fatcat22/a2c-AntBulletEnv-v0
|
fatcat22
| 2023-06-28T09:52:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:45:49Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1234.32 +/- 63.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YakovElm/Hyperledger_10_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T09:45:17Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T09:44:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_10_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_10_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0332
- Train Accuracy: 0.9892
- Validation Loss: 0.7790
- Validation Accuracy: 0.8091
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5431 | 0.7045 | 0.5796 | 0.7261 | 0 |
| 0.1339 | 0.9515 | 0.6705 | 0.7905 | 1 |
| 0.0332 | 0.9892 | 0.7790 | 0.8091 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Zhibinhong/checkpoints_new
|
Zhibinhong
| 2023-06-28T09:45:08Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-28T09:03:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Zhibinhong/checkpoints_new
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
catrabbitbear/ppo-SnowballTarget
|
catrabbitbear
| 2023-06-28T09:44:33Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:44:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: catrabbitbear/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KorinYTUK/JTTS.v1
|
KorinYTUK
| 2023-06-28T09:42:19Z | 0 | 0 | null |
[
"text-to-speech",
"en",
"license:openrail",
"region:us"
] |
text-to-speech
| 2023-06-28T09:41:16Z |
---
license: openrail
language:
- en
pipeline_tag: text-to-speech
---
|
gozu888/Envit5-tuned
|
gozu888
| 2023-06-28T09:42:05Z | 24 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"translation",
"en-vi",
"vi-en",
"en",
"vi",
"dataset:mt_eng_vietnamese",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-01-06T14:21:02Z |
---
license: apache-2.0
datasets:
- mt_eng_vietnamese
language:
- en
- vi
tags:
- translation
- t5
- en-vi
- vi-en
---
# gozu888/Envit5-tuned
This model is a fine-tuned version of VietAI/envit5-translation on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set:
- Val Loss: 0.8422
- BLEU: 25.14
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** gozu888
- **Model type:** T5
- **Finetuned from model [optional]:** [VietAI/envit5-translation](https://huggingface.co/VietAI/envit5-translation)
### Hyperparameters
The following hyperparameters were used during training:
- Repo: [simpleT5](https://github.com/Shivanandroy/simpleT5)
- Dataset: [mt_eng_vietnamese](https://huggingface.co/datasets/mt_eng_vietnamese) + custom (~40%)
```python
source_max_token_len=128
target_max_token_len=128
batch_size=32
max_epochs=8
```
### Examples
+ This fine-tuned model, which was trained using only 128 token lengths, is most effective at sentence-by-sentence translation.
```json
[
"en: I do not believe a human being can be creative.",
"en: If we perceive creation around us with a certain profoundness, we can imitate in many different ways, in permutations and combinations, and seem creative in society, but actually, we are not really creative.",
"en: Everything that can be created has already been done in creation.",
"en: We are clever craftsmen at the most.",
"en: If you define the word \“creativity\” as really creating something – whether you make a movie, paint something, build a building, speak, or whatever else – this is not really creative – it is clever imitation.",
"en: Because we have paid attention to different aspects of life, we are able to imitate in ways that others have not thought possible.",
]
```
```json
[
"vi: Tôi không tin rằng một con người có thể sáng tạo.",
"vi: Nếu chúng ta nhận biết được sáng tạo xung quanh chúng ta với một thâm sâu nào đó, chúng ta có thể bắt chước trong nhiều cách khác nhau, trong những hoán vị và những kết hợp, và dường như sáng tạo trong xã hội, nhưng thực ra, chúng ta không thực sự sáng tạo. ",
"vi: Mọi thứ có thể được tạo ra đều đã được thực hiện trong sáng tạo.",
"vi: Chúng ta là những thợ thủ công thông minh nhất.",
"vi: Nếu bạn định nghĩa từ \"sáng tạo\" là thực sự sáng tạo ra một cái gì đó - cho dù bạn làm một bộ phim, vẽ một cái gì đó, xây dựng một toà nhà, nói chuyện, hoặc bất cứ điều gì khác - đây không thực sự là sáng tạo - nó là sự bắt chước thông minh.",
"vi: Bởi vì chúng ta đã chú tâm đến những khía cạnh khác nhau của cuộc sống, chúng ta có thể bắt chước theo những cách mà những người khác không nghĩ là có thể.",
]
```
|
SHENMU007/neunit_BASE_V10.10
|
SHENMU007
| 2023-06-28T09:37:45Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-28T06:34:14Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Nara-Lab/nallm-polyglot-ko-1.3b-base
|
Nara-Lab
| 2023-06-28T09:24:15Z | 2,272 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T01:12:03Z |
---
license: mit
language:
- ko
---
NA-LLM(나름)은 나라지식정보가 개발한 한국어 Large Language Model (LLM) 입니다.
https://github.com/Nara-Information/NA-LLM
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V3
|
NickyNicky
| 2023-06-28T09:20:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T09:20:47Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
arnodd/ppo-LunarLander-v2
|
arnodd
| 2023-06-28T09:16:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:16:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.48 +/- 22.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
robrecht/ppo-lunar-lander-v1
|
robrecht
| 2023-06-28T09:15:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:05:01Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.56 +/- 13.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Matthijs/mms-tts-abi
|
Matthijs
| 2023-06-28T09:13:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-06-27T14:00:05Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS) : Text-to-Speech Models
This repository contains the **Abidji (abi)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage
Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Usage
Using this checkpoint from Hugging Face Transformers:
```python
from transformers import VitsModel, VitsMmsTokenizer
import torch
model = VitsModel.from_pretrained("Matthijs/mms-tts-abi")
tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-abi")
text = "some example text in the Abidji language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs)
from IPython.display import Audio
Audio(output.audio[0], rate=16000)
```
## Model credits
This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
|
elementoffear/ppo-Huggy
|
elementoffear
| 2023-06-28T09:12:35Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T09:12:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: elementoffear/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hztang/BART-large-base-custom
|
hztang
| 2023-06-28T08:57:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T01:48:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BART-large-base-custom
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART-large-base-custom
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 238 | 0.6719 | 24.2424 | 19.2095 | 23.2835 | 23.9833 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch
|
NickyNicky
| 2023-06-28T08:52:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-28T08:52:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Matthijs/mms-tts-abp
|
Matthijs
| 2023-06-28T08:45:05Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-06-27T14:00:22Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS) : Text-to-Speech Models
This repository contains the **Ayta, Abellen (abp)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage
Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Usage
Using this checkpoint from Hugging Face Transformers:
```python
from transformers import VitsModel, VitsMmsTokenizer
import torch
model = VitsModel.from_pretrained("Matthijs/mms-tts-abp")
tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-abp")
text = "some example text in the Ayta, Abellen language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs)
from IPython.display import Audio
Audio(output.audio[0], rate=16000)
```
Note: For certain checkpoints, the input text must be converted to the Latin alphabet first using the
[uroman](https://github.com/isi-nlp/uroman) tool.
## Model credits
This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
|
joydeeph/ppo-LunarLander-v2
|
joydeeph
| 2023-06-28T08:42:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T08:41:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.93 +/- 21.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
cupcakeDriveby/q-taxi-v3
|
cupcakeDriveby
| 2023-06-28T08:39:16Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T08:39:14Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="cupcakeDriveby/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
padmajabfrl/Religion-Classification-Custom-Model
|
padmajabfrl
| 2023-06-28T08:35:16Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T06:34:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Religion-Classification-Custom-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Religion-Classification-Custom-Model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0141 | 1.0 | 5102 | 0.0026 | 0.9995 |
| 0.0033 | 2.0 | 10204 | 0.0015 | 0.9996 |
| 0.0024 | 3.0 | 15306 | 0.0021 | 0.9996 |
| 0.0005 | 4.0 | 20408 | 0.0007 | 0.9999 |
| 0.0013 | 5.0 | 25510 | 0.0009 | 0.9999 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yhna/Reinforce-policy-grad-cartpole
|
yhna
| 2023-06-28T08:35:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T04:29:48Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-policy-grad-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
cupcakeDriveby/q-FrozenLake-v1-4x4-noSlippery
|
cupcakeDriveby
| 2023-06-28T08:30:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T08:30:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="cupcakeDriveby/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
GretaClementi96/blip2-opt-2.7b-inbreast-captions-adapters
|
GretaClementi96
| 2023-06-28T08:25:49Z | 22 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-21T19:50:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
mszpro/CoreML_StableDiffusion-v-1-5_ControlledUnet_SplitEinsum
|
mszpro
| 2023-06-28T08:17:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-28T08:13:17Z |
# Stable Diffusion 1.5 ControlledUnet file for CoreML
This is the compiled ControlledUNet file of Stable Diffusion 1.5, used for CoreML image generation with ControlNet.
It accepts 12 additional_residual inputs.
This is compiled for SPLIT EINSUM, which means it also works for iPhone and iPad using Apple Neural Engine.
|
MU-NLPC/calc-baseline-t5-xl
|
MU-NLPC
| 2023-06-28T08:10:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-24T09:44:44Z |
This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
The reported results can be reproduced by using [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository.
|
MU-NLPC/calc-baseline-t5-large
|
MU-NLPC
| 2023-06-28T08:09:37Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T15:31:51Z |
This is a baseline model for our [calculator-assisted models](https://huggingface.co/models?search=emnlp2023)
trained on a mixture of all our [Calc-X datasets](https://huggingface.co/datasets?search=emnlp2023).
See the corresponding paper for details.
The reported results can be reproduced by using [evaluation script](https://github.com/emnlp2023sub/gadgets/blob/65e24e810cf5ea20aceb8a3c8ddbc19f035ab694/examples/test_calc.py)
from the project repository.
|
OverShifted/ppo-LunarLander-v2
|
OverShifted
| 2023-06-28T07:48:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T07:47:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 230.07 +/- 24.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YakovElm/Hyperledger_5_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T07:47:59Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:47:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0898
- Train Accuracy: 0.9727
- Validation Loss: 0.8739
- Validation Accuracy: 0.7521
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6014 | 0.6783 | 0.5787 | 0.7168 | 0 |
| 0.2834 | 0.8865 | 0.7050 | 0.7687 | 1 |
| 0.0898 | 0.9727 | 0.8739 | 0.7521 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yhna/Reinforce-pixelcopter
|
yhna
| 2023-06-28T07:43:10Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T07:43:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.60 +/- 27.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Assem-Ihab/trainingthemodel
|
Assem-Ihab
| 2023-06-28T07:34:52Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T12:21:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: trainingthemodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainingthemodel
This model is a fine-tuned version of [abdalrahmanshahrour/arabartsummarization](https://huggingface.co/abdalrahmanshahrour/arabartsummarization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2622
- Rouge1: 0.1194
- Rouge2: 0.0483
- Rougel: 0.0975
- Rougelsum: 0.0977
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 154 | 2.7120 | 0.1178 | 0.0452 | 0.0966 | 0.0966 | 20.0 |
| No log | 2.0 | 308 | 2.4175 | 0.1202 | 0.0482 | 0.0995 | 0.0998 | 20.0 |
| No log | 3.0 | 462 | 2.3151 | 0.1181 | 0.0468 | 0.0963 | 0.0966 | 20.0 |
| 3.034 | 4.0 | 616 | 2.2622 | 0.1194 | 0.0483 | 0.0975 | 0.0977 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/MariaDB_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T07:34:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:33:52Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0009
- Train Accuracy: 1.0
- Validation Loss: 0.3045
- Validation Accuracy: 0.9598
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0769 | 0.9849 | 0.2493 | 0.9598 | 0 |
| 0.0017 | 1.0 | 0.2868 | 0.9598 | 1 |
| 0.0009 | 1.0 | 0.3045 | 0.9598 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/rmfbtsrvcv1tgrll14
|
nolanaatama
| 2023-06-28T07:30:49Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T07:26:26Z |
---
license: creativeml-openrail-m
---
|
swardiantara/drone-term-extractor
|
swardiantara
| 2023-06-28T07:22:35Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"drone",
"drone forensics",
"named entity recognition",
"en",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-23T09:37:03Z |
---
license: gpl
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: token-classification
tags:
- drone
- drone forensics
- named entity recognition
widget:
- text: "Compass abnormal. Solution: 1. Ensure there are no magnets or metal objects near the aircraft. The ground or walls may contain metal. Move away from sources of interference before attempting flight. 2. Calibrate Compass Before Takeoff"
example_title: "Example 1"
- text: "The flight attitude angle is larger in Sport mode. The gimbal will rotate when the aircraft starts or stops. Use Normal mode if required for stable shooting"
example_title: "Example 2"
- text: "Motor speed error. Land or return to home promptly. After powering off the aircraft, replace the propeller on the beeping ESC. If the issue persists, contact DJI Support"
example_title: "Example 3"
- text: "GPS signal low. Aircraft unable to auto hover and takeoff restricted. Move to environment with adequate light. Unlocking takeoff restrictions not recommended"
example_title: "Example 4"
---
|
kejolong/etomisa
|
kejolong
| 2023-06-28T07:21:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T04:29:02Z |
---
license: creativeml-openrail-m
---
|
YakovElm/MariaDB_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T07:09:43Z | 52 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T07:09:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0013
- Train Accuracy: 1.0
- Validation Loss: 0.3394
- Validation Accuracy: 0.9523
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0724 | 0.9958 | 0.2766 | 0.9523 | 0 |
| 0.0024 | 1.0 | 0.3180 | 0.9523 | 1 |
| 0.0013 | 1.0 | 0.3394 | 0.9523 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/MariaDB_5_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:45:06Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T06:44:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 1.0
- Validation Loss: 0.4829
- Validation Accuracy: 0.9322
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0727 | 0.9808 | 0.3510 | 0.9322 | 0 |
| 0.0037 | 1.0 | 0.4349 | 0.9322 | 1 |
| 0.0014 | 1.0 | 0.4829 | 0.9322 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit2-q-FrozenLake-v1-4x4-noSlippery
|
Broonion
| 2023-06-28T06:43:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T06:43:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Broonion/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlonCohen/social-groups-ner-first-try
|
AlonCohen
| 2023-06-28T06:24:31Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-29T08:08:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: social-groups-ner-first-try
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# social-groups-ner-first-try
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Sidharthkr/MPT-7b-chat-GGML
|
Sidharthkr
| 2023-06-28T06:20:54Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-28T05:41:12Z |
Compatibilty
These files are not compatible with llama.cpp.
Currently they can be used with:
KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: KoboldCpp
The ctransformers Python library, which includes LangChain support: ctransformers
The GPT4All-UI which uses ctransformers: GPT4All-UI
rustformers' llm
The example mpt binary provided with ggml
|
YakovElm/Jira_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:20:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T06:19:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0014
- Train Accuracy: 1.0
- Validation Loss: 0.4661
- Validation Accuracy: 0.9338
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1013 | 0.9780 | 0.3644 | 0.9338 | 0 |
| 0.0030 | 1.0 | 0.4356 | 0.9338 | 1 |
| 0.0014 | 1.0 | 0.4661 | 0.9338 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alexshengzhili/llava-7bv0-mm-projector-ft-with-ocr-caption-prompted-paragraph
|
alexshengzhili
| 2023-06-28T06:16:48Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-28T05:25:30Z |
---
license: mit
---
This is the feature alignment pre-training work to train only only the multi-modal projector.
"Predict" paragraph given caption, ocr and image token
|
YakovElm/Jira_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T06:00:12Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:59:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0012
- Train Accuracy: 1.0
- Validation Loss: 3.4399
- Validation Accuracy: 0.5205
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1521 | 0.9675 | 2.5971 | 0.5205 | 0 |
| 0.0028 | 1.0 | 3.2106 | 0.5205 | 1 |
| 0.0012 | 1.0 | 3.4399 | 0.5205 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Apache_20_BERT_Over_Sampling
|
YakovElm
| 2023-06-28T05:51:52Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:51:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_20_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_20_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0090
- Train Accuracy: 0.9975
- Validation Loss: 0.8778
- Validation Accuracy: 0.8677
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4086 | 0.7859 | 0.5535 | 0.7992 | 0 |
| 0.0378 | 0.9894 | 0.6761 | 0.8872 | 1 |
| 0.0090 | 0.9975 | 0.8778 | 0.8677 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alexshengzhili/llava-fte2e-scicap-w-mentions-390K-440MB
|
alexshengzhili
| 2023-06-28T05:44:43Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T07:57:45Z |
This model is formulated to 'predict' caption given image and mentioned paragraph
Trains on alexshengzhili/llava-SciCapplus-w-mentions[https://huggingface.co/datasets/alexshengzhili/llava-SciCapplus-w-mentions/tree/main]
|
YakovElm/Jira_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T05:40:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:39:26Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 1.0
- Validation Loss: 3.4990
- Validation Accuracy: 0.4921
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0884 | 0.9895 | 2.6907 | 0.4921 | 0 |
| 0.0032 | 1.0 | 3.2542 | 0.4921 | 1 |
| 0.0015 | 1.0 | 3.4990 | 0.4921 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Broonion/RLcourse-unit1bonus-ppo-Huggy
|
Broonion
| 2023-06-28T05:38:23Z | 28 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-28T05:38:14Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Broonion/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rizkyds/bert-phb
|
rizkyds
| 2023-06-28T05:28:11Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"code",
"id",
"dataset:Wikidepia/IndoSQuAD",
"license:cc",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-12T03:25:52Z |
---
license: cc
datasets:
- Wikidepia/IndoSQuAD
language:
- id
metrics:
- bertscore
- accuracy
pipeline_tag: question-answering
tags:
- code
---
|
YakovElm/Jira_5_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T05:20:05Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T05:19:25Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira_5_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira_5_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0017
- Train Accuracy: 1.0
- Validation Loss: 3.5486
- Validation Accuracy: 0.4858
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1443 | 0.9748 | 2.5459 | 0.4858 | 0 |
| 0.0042 | 1.0 | 3.2782 | 0.4858 | 1 |
| 0.0017 | 1.0 | 3.5486 | 0.4858 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
allman/bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
|
allman
| 2023-06-28T05:07:52Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:02:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0784
- F1: 0.5207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1946 | 1.0 | 766 | 1.1920 | 0.4331 |
| 1.0715 | 2.0 | 1532 | 1.0936 | 0.4909 |
| 1.0093 | 3.0 | 2298 | 1.0583 | 0.5116 |
| 0.8907 | 4.0 | 3064 | 1.0784 | 0.5207 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/IntelDAOS_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T05:00:03Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:59:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0011
- Train Accuracy: 1.0
- Validation Loss: 0.6607
- Validation Accuracy: 0.9099
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0684 | 0.9990 | 0.5155 | 0.9099 | 0 |
| 0.0023 | 1.0 | 0.6168 | 0.9099 | 1 |
| 0.0011 | 1.0 | 0.6607 | 0.9099 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
eliztmq/q-taxi
|
eliztmq
| 2023-06-28T04:56:12Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T04:56:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="eliztmq/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Poonnnnnnnn/type-prediction-transformer
|
Poonnnnnnnn
| 2023-06-28T04:44:24Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-12T07:21:22Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: type-prediction-transformer
results: []
widget:
- text: "ถนนผุพังทำให้เกิดเสียงดังเวลารถวิ่ง"
- text: "ขี่มอไซค์บนทางเท้ามันจะเกินปุยมุ้ย"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# type-prediction-transformer
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0497
- F1: 0.8651
- Roc Auc: 0.9260
- Accuracy: 0.8208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 149 | 0.0812 | 0.8070 | 0.8677 | 0.7588 |
| No log | 2.0 | 298 | 0.0591 | 0.8585 | 0.9064 | 0.8141 |
| No log | 3.0 | 447 | 0.0493 | 0.8719 | 0.9144 | 0.8258 |
| 0.0886 | 4.0 | 596 | 0.0506 | 0.8614 | 0.9222 | 0.8090 |
| 0.0886 | 5.0 | 745 | 0.0487 | 0.8683 | 0.9255 | 0.8174 |
| 0.0886 | 6.0 | 894 | 0.0506 | 0.8693 | 0.9291 | 0.8191 |
| 0.0254 | 7.0 | 1043 | 0.0519 | 0.8619 | 0.9307 | 0.8090 |
| 0.0254 | 8.0 | 1192 | 0.0497 | 0.8651 | 0.9260 | 0.8208 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
YakovElm/IntelDAOS_15_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T04:39:11Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T04:38:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS_15_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS_15_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Train Accuracy: 1.0
- Validation Loss: 0.8058
- Validation Accuracy: 0.8859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1334 | 0.9790 | 0.5877 | 0.8859 | 0 |
| 0.0037 | 1.0 | 0.7378 | 0.8859 | 1 |
| 0.0015 | 1.0 | 0.8058 | 0.8859 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
limcheekin/fastchat-t5-3b-ct2
|
limcheekin
| 2023-06-28T04:36:38Z | 14 | 2 |
transformers
|
[
"transformers",
"ctranslate2",
"fastchat-t5-3b",
"quantization",
"int8",
"text2text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-31T06:51:36Z |
---
license: apache-2.0
language:
- en
tags:
- ctranslate2
- fastchat-t5-3b
- quantization
- int8
pipeline_tag: text2text-generation
---
# Model Card for FastChat-T5 3B Q8
The model is quantized version of the [lmsys/fastchat-t5-3b-v1.0](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0) with int8 quantization.
## Model Details
### Model Description
The model being quantized using [CTranslate2](https://opennmt.net/CTranslate2/) with the following command:
```
ct2-transformers-converter --model lmsys/fastchat-t5-3b --output_dir lmsys/fastchat-t5-3b-ct2 --copy_files generation_config.json added_tokens.json tokenizer_config.json special_tokens_map.json spiece.model --quantization int8 --force --low_cpu_mem_usage
```
If you want to perform the quantization yourself, you need to install the following dependencies:
```
pip install -qU ctranslate2 transformers[torch] sentencepiece accelerate
```
- **Shared by:** Lim Chee Kin
- **License:** Apache 2.0
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import ctranslate2
import transformers
translator = ctranslate2.Translator("limcheekin/fastchat-t5-3b-ct2")
tokenizer = transformers.AutoTokenizer.from_pretrained("limcheekin/fastchat-t5-3b-ct2")
input_text = "translate English to German: The house is wonderful."
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))
print(output_text)
```
The code is taken from https://opennmt.net/CTranslate2/guides/transformers.html#t5.
The key method of the code above is `translate_batch`, you can find out [its supported parameters here](https://opennmt.net/CTranslate2/python/ctranslate2.Translator.html#ctranslate2.Translator.translate_batch).
|
yihyeji/hanbok_q
|
yihyeji
| 2023-06-28T04:33:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-28T04:06:26Z |
---
license: creativeml-openrail-m
base_model: /workspace/data3/model_checkpoints/DIFFUSION_DB/Diffusion_models/diffusers/v15/chilloutmix_NiPrunedFp16Fix/
instance_prompt: a photo of 1 girl wearing hanbok_q
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - yihyeji/hanbok_q
These are LoRA adaption weights for /workspace/data3/model_checkpoints/DIFFUSION_DB/Diffusion_models/diffusers/v15/chilloutmix_NiPrunedFp16Fix/. The weights were trained on a photo of 1 girl wearing hanbok_q using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
|
PritamReddy/test-demo
|
PritamReddy
| 2023-06-28T04:31:57Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T17:01:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-demo
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3257
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 7.6667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 2.0 | 2 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 3.0 | 3 | 4.4727 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 4.0 | 4 | 4.3257 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
| No log | 5.0 | 5 | 4.3257 | 0.0 | 0.0 | 0.0 | 0.0 | 7.6667 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hoaio/dqn-SpaceInvadersNoFrameskip-v4
|
hoaio
| 2023-06-28T04:12:28Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T04:11:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 100.70
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hoaio -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hoaio
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
QuangHuy54/long-t5-tglobal-large-multimedia
|
QuangHuy54
| 2023-06-28T03:56:31Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T12:17:31Z |
---
tags:
- generated_from_trainer
datasets:
- multi_news
model-index:
- name: long-t5-tglobal-large-multimedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-large-multimedia
This model is a fine-tuned version of [QuangHuy54/long-t5-tglobal-large-multimedia](https://huggingface.co/QuangHuy54/long-t5-tglobal-large-multimedia) on the multi_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 1 | 2.1163 | 0.3333 | 0.0859 | 0.1667 | 0.1666 | 114.46 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
loghai/q-FrozenLake-v1-4x4-noSlippery
|
loghai
| 2023-06-28T03:50:04Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:50:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="loghai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ALPHONSE28/SEMANA10_SINTENTICOS
|
ALPHONSE28
| 2023-06-28T03:45:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T03:15:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SEMANA10_SINTENTICOS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SEMANA10_SINTENTICOS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3103
- Accuracy: 0.9048
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chaowu/Reinforce-Pixelcopter-PLE-v0
|
chaowu
| 2023-06-28T03:36:42Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:00:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 67.30 +/- 46.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
YakovElm/Hyperledger_20_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T03:34:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T03:34:02Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_20_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_20_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 1.0
- Validation Loss: 0.8636
- Validation Accuracy: 0.8983
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0338 | 0.9889 | 0.7164 | 0.8983 | 0 |
| 0.0008 | 1.0 | 0.8060 | 0.8983 | 1 |
| 0.0004 | 1.0 | 0.8636 | 0.8983 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aka7774/frog_bench
|
aka7774
| 2023-06-28T03:27:27Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-02-22T07:21:45Z |
# frog train benchmark
ハローカエル(スペルミスに気を付けよう)
## 概要
- kohya train_networkのベンチです
- サンプルのカエルを使います
- https://note.com/kohya_ss/n/nb20c5187e15a
- https://note.com/api/v2/attachments/download/e3cd9aa39e600cac51e2022eaa01a931
- 中身をこのリポジトリにコピーしてあります
- モデルはSDv1.5を使います
- https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors
- pruneしたファイルを用意してあります
## 実行
- Windows
- バッチをダウンロードして任意のディレクトリ(日本語やスペースが無いと良い)に置いて実行します
- https://huggingface.co/aka7774/frog_bench/resolve/main/frog_bench.bat
- sd-scriptsやvenvのインストールが行われます
- SDv1.5モデルのダウンロードが行われます
- 学習が行われます
- 推論が行われます sd-scripts/txt2img/ に画像を保存します
- 学習(accelerateコマンド)の所要時間が表示されます
- sd-scripts/result.txt にも保存されます
- Windows以外もしくは環境構築済みの場合
- サンプル通りに学習を実行してaccelerateコマンドにかかったtimeを計測してください
- bitsandbytesでエラーが出る場合
- --use-8bit-adamを外すといいかも
- VRAMの使用量がギリギリの場合
- batch_sizeを下げたほうが速く終わることもあるかも
## 設定
バッチファイルを編集することでいくつかの設定が出来ます。
- bypass Install CUDA Toolkit
- pytorchのlibにPATHを通すことでCUDA Toolkitのインストールを省略する
- Path to
- PythonとgitにPATHが通っていない時にフルパスで指定する
- Pythonとgit自体のインストールは別途必要
- VERS
- 1はkohya推奨バージョン(古い)
- 2は1111推奨バージョン(新しい) xformersが動かない可能性がある
- MODE
- 複数回実行したい時にインストールや学習を飛ばす
- BATCH_SIZE
- VRAMが10GB未満の時に減らす
# 変更点
- num_cpu_threads_per_process(未変更)
- 1のほうがいいらしいけどサンプルが4なのでそのまま
- learning_rate(未変更)
- 途中で仕様変更があったので1桁減らしたほうがいいらしいけどそのまま
- inference(gen_img_diffusers.py)
- 公式のオプションがbf16だったけどA100でしか動かないというエラーが出るのでfp16に変えてあります
- interactiveしなくて済むようにした
- accelerate config
- accelerate launchに引数を追加することで割愛しています
|
beomi/kollama-13b
|
beomi
| 2023-06-28T03:23:51Z | 2,256 | 17 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"KoLLAMA",
"KoreanGPT",
"ko",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-14T01:20:46Z |
---
license: mit
language:
- ko
- en
metrics:
- perplexity
- accuracy
pipeline_tag: text-generation
tags:
- llama
- KoLLAMA
- KoreanGPT
---
> 🚧 Note: this repo is under construction 🚧
## Todo
✅ - finish
⏳ - currently working on it
- ✅ Train new BBPE Tokenizer
- ✅ Test train code on TPUv4 Pods (with model parallel)
- ✅ Converting test (jax to PyTorch)
- ✅ LM train validation on minimal dataset (1 sentence 1000 step)
- ⏳ Build Data Shuffler (curriculum learning)
- ⏳ Train 7B Model
- ⏳ Train 13B Model
- Train 33B Model
- Train 65B Model
# KoLLaMA-13B Model Card
KoLLaMA (13B) trained on Korean/English/Code dataset with LLaMA Architecture via JAX,
with the warm support from [Google TPU Research Cloud program](https://sites.research.google/trc/about/) for providing part of the computation resources.
## Model details
**Researcher developing the model**
Junbum Lee (aka Beomi)
**Model date**
KoLLaMA was trained between 2022.04~
**Model version**
This is alpha version of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
(This repo contains 13B model!)
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
More info for KoAlpaca:
[TBD]
**Citations details**
KoLLAMA: [TBD]
LLAMA: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
MIT
**Where to send questions or comments about the model**
Questions and comments about KoLLaMA can be sent via the [GitHub repository](https://github.com/beomi/KoLLAMA) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of KoLLaMA is research on Korean Opensource large language models
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
## Evaluation datasets
[TBD]
## Training dataset
[TBD]
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
eliztmq/q-FrozenLake-v1-4x4-noSlippery
|
eliztmq
| 2023-06-28T03:22:30Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-28T03:22:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="eliztmq/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tyavika/Bert-QA-Pytorch-Squad2
|
tyavika
| 2023-06-28T01:49:05Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T23:10:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Bert-QA-Pytorch-Squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-QA-Pytorch-Squad2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3031 | 1.0 | 1644 | 1.1841 |
| 0.8636 | 2.0 | 3288 | 1.1397 |
| 0.5304 | 3.0 | 4932 | 1.3452 |
| 0.3194 | 4.0 | 6576 | 1.5532 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Hyperledger_10_BERT_Under_Sampling
|
YakovElm
| 2023-06-28T01:39:49Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-28T01:39:10Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_10_BERT_Under_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_10_BERT_Under_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 1.0
- Validation Loss: 1.1748
- Validation Accuracy: 0.8600
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0466 | 0.9879 | 0.9382 | 0.8600 | 0 |
| 0.0010 | 1.0 | 1.0854 | 0.8600 | 1 |
| 0.0004 | 1.0 | 1.1748 | 0.8600 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nxquang-al/pretrained-trocr-small-vietnamese-nom
|
nxquang-al
| 2023-06-28T01:25:37Z | 48 | 2 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"vi",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-06-27T02:43:59Z |
---
language:
- vi
metrics:
- cer
- accuracy
---
|
NjinHF/swin-tiny-patch4-window7-224-finetuned-eurosat
|
NjinHF
| 2023-06-28T01:14:42Z | 223 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-08T06:03:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0731
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 190 | 0.1200 | 0.9626 |
| 0.2012 | 2.0 | 380 | 0.1026 | 0.9656 |
| 0.1437 | 3.0 | 570 | 0.0731 | 0.9770 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.