Dataset schema: `modelId` (string, 5–139 chars), `author` (string, 2–42 chars), `last_modified` (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-10 18:30:15), `downloads` (int64, 0 to 223M), `likes` (int64, 0 to 11.7k), `library_name` (string, 553 classes), `tags` (list, 1 to 4.05k entries), `pipeline_tag` (string, 55 classes), `createdAt` (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-10 18:29:50), `card` (string, 11 chars to 1.01M chars).
| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
AmbarB12/my_awesome_model
|
AmbarB12
| 2023-07-10T17:30:33Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-09T18:03:55Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AmbarB12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AmbarB12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0631
- Validation Loss: 0.2229
- Train Accuracy: 0.9306
- Epoch: 2
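The card does not include a usage snippet; the following is a minimal sketch, assuming the checkpoint loads through the standard `transformers` pipeline API (the input sentence is only an example):
```python
from transformers import pipeline

# Minimal sketch: the repo carries TensorFlow weights (note the `tf` tag),
# so we request the TF backend explicitly.
classifier = pipeline("text-classification", model="AmbarB12/my_awesome_model", framework="tf")
print(classifier("This movie was surprisingly good."))
```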
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2523 | 0.1891 | 0.927 | 0 |
| 0.1327 | 0.2007 | 0.9298 | 1 |
| 0.0631 | 0.2229 | 0.9306 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cagarraz/rl_course_vizdoom_health_gathering_supreme
|
cagarraz
| 2023-07-10T17:23:21Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T17:23:08Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 3.94 +/- 0.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cagarraz/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
PixelPerfect/PixelPerfectLORAs
|
PixelPerfect
| 2023-07-10T17:14:36Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-20T12:34:00Z |
LoRAs with Different Styles for PixelPerfect Text-to-Image Model!
|
NasimB/gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
|
NasimB
| 2023-07-10T17:09:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T15:25:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4013
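No usage example is provided; a minimal text-generation sketch, assuming the standard `transformers` pipeline API (the prompt is only an example):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint and sample a short continuation.
generator = pipeline("text-generation", model="NasimB/gpt2-concat-all-new-mod-datasets-rarity-all-iorder-13k-2p6k")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```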
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7847 | 0.32 | 500 | 5.6658 |
| 5.4409 | 0.63 | 1000 | 5.2342 |
| 5.088 | 0.95 | 1500 | 4.9831 |
| 4.8091 | 1.27 | 2000 | 4.8440 |
| 4.6774 | 1.59 | 2500 | 4.7254 |
| 4.5641 | 1.9 | 3000 | 4.6255 |
| 4.3493 | 2.22 | 3500 | 4.5674 |
| 4.2735 | 2.54 | 4000 | 4.5081 |
| 4.2294 | 2.86 | 4500 | 4.4480 |
| 4.0526 | 3.17 | 5000 | 4.4279 |
| 3.9479 | 3.49 | 5500 | 4.4002 |
| 3.9223 | 3.81 | 6000 | 4.3596 |
| 3.8021 | 4.13 | 6500 | 4.3586 |
| 3.6504 | 4.44 | 7000 | 4.3495 |
| 3.6428 | 4.76 | 7500 | 4.3416 |
| 3.58 | 5.08 | 8000 | 4.3470 |
| 3.4494 | 5.4 | 8500 | 4.3484 |
| 3.4443 | 5.71 | 9000 | 4.3455 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
komo-dono/harukatomatsu
|
komo-dono
| 2023-07-10T17:05:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T17:03:49Z |
---
license: openrail
language:
- ja
tags:
- music
---
Haruka Tomatsu, 600 epochs.
|
opendiffusion/sentimentcheck
|
opendiffusion
| 2023-07-10T16:58:49Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"bert",
"region:us"
] | null | 2023-05-11T18:26:04Z |
# Intro
OpenDiffusion's SentimentCheck is an AI model built on TensorFlow, Keras, and pickled artifacts. SentimentCheck uses deep learning to classify sentiment in text accurately, making it a flexible tool for businesses, researchers, and developers.
## Usage
---
language:
- en
- nl
- de
- fr
- it
- es
license: mit
---
# bert-base-multilingual-uncased-sentiment
This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
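A star-rating model of this kind is typically run through the `transformers` text-classification pipeline; the sketch below uses a placeholder checkpoint id, since this card does not state where the finetuned weights are published:
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual finetuned checkpoint.
classifier = pipeline("text-classification", model="<sentiment-checkpoint>")
# The model maps a review to a 1-5 star rating, e.g. "4 stars".
print(classifier("The delivery was quick and the product works great."))
```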
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
| Dutch | 80k |
| German | 137k |
| French | 140k |
| Italian | 72k |
| Spanish | 50k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| English | 67% | 95%
| Dutch | 57% | 93%
| German | 61% | 94%
| French | 59% | 94%
| Italian | 59% | 95%
| Spanish | 58% | 95%
|
Buth/fatuh
|
Buth
| 2023-07-10T16:50:46Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"en",
"dataset:Open-Orca/OpenOrca",
"license:openrail",
"region:us"
] | null | 2023-07-10T16:48:59Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
---
|
svalcin/q-FrozenLake-v1-4x4-noSlippery
|
svalcin
| 2023-07-10T16:39:14Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T16:39:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="svalcin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
jordyvl/vit-small_tobacco3482_kd_MSE
|
jordyvl
| 2023-07-10T16:38:44Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T15:58:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_tobacco3482_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_tobacco3482_kd_MSE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.84
- Brier Loss: 0.2974
- Nll: 0.8913
- F1 Micro: 0.8400
- F1 Macro: 0.8190
- Ece: 0.2456
- Aurc: 0.0512
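No inference example is given; a minimal sketch using the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Classify a scanned document image with the fine-tuned ViT checkpoint.
classifier = pipeline("image-classification", model="jordyvl/vit-small_tobacco3482_kd_MSE")
print(classifier("document_scan.png"))  # placeholder path to a local image
```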
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.4711 | 0.21 | 0.8898 | 6.2752 | 0.2100 | 0.1403 | 0.2702 | 0.7673 |
| No log | 2.0 | 14 | 1.0769 | 0.41 | 0.8120 | 5.2446 | 0.41 | 0.2713 | 0.3253 | 0.5170 |
| No log | 3.0 | 21 | 0.7901 | 0.51 | 0.7057 | 2.6186 | 0.51 | 0.4114 | 0.3359 | 0.3162 |
| No log | 4.0 | 28 | 0.6044 | 0.61 | 0.5736 | 1.8428 | 0.61 | 0.4989 | 0.3358 | 0.1889 |
| No log | 5.0 | 35 | 0.4605 | 0.7 | 0.5009 | 1.3395 | 0.7 | 0.6120 | 0.3587 | 0.1321 |
| No log | 6.0 | 42 | 0.4484 | 0.73 | 0.4373 | 1.4781 | 0.7300 | 0.6394 | 0.2751 | 0.1150 |
| No log | 7.0 | 49 | 0.4406 | 0.765 | 0.4180 | 1.1081 | 0.765 | 0.7193 | 0.3066 | 0.0981 |
| No log | 8.0 | 56 | 0.3421 | 0.82 | 0.3575 | 0.9309 | 0.82 | 0.7764 | 0.2867 | 0.0703 |
| No log | 9.0 | 63 | 0.4201 | 0.75 | 0.3973 | 1.5859 | 0.75 | 0.7562 | 0.2618 | 0.1051 |
| No log | 10.0 | 70 | 0.4086 | 0.795 | 0.3775 | 1.2870 | 0.795 | 0.7701 | 0.3104 | 0.0691 |
| No log | 11.0 | 77 | 0.2867 | 0.82 | 0.3251 | 1.2141 | 0.82 | 0.7996 | 0.2511 | 0.0683 |
| No log | 12.0 | 84 | 0.2964 | 0.825 | 0.3233 | 1.0042 | 0.825 | 0.8028 | 0.2801 | 0.0538 |
| No log | 13.0 | 91 | 0.3010 | 0.81 | 0.3351 | 1.0085 | 0.81 | 0.7735 | 0.2678 | 0.0584 |
| No log | 14.0 | 98 | 0.2741 | 0.835 | 0.3194 | 1.0574 | 0.835 | 0.8127 | 0.2982 | 0.0542 |
| No log | 15.0 | 105 | 0.2524 | 0.845 | 0.3228 | 1.1162 | 0.845 | 0.8225 | 0.2911 | 0.0568 |
| No log | 16.0 | 112 | 0.2652 | 0.83 | 0.3154 | 0.8145 | 0.83 | 0.8130 | 0.2786 | 0.0516 |
| No log | 17.0 | 119 | 0.2478 | 0.83 | 0.3241 | 1.1158 | 0.83 | 0.8034 | 0.2776 | 0.0683 |
| No log | 18.0 | 126 | 0.2526 | 0.85 | 0.3112 | 1.0132 | 0.85 | 0.8324 | 0.2757 | 0.0517 |
| No log | 19.0 | 133 | 0.2423 | 0.855 | 0.3023 | 1.0623 | 0.855 | 0.8382 | 0.2727 | 0.0561 |
| No log | 20.0 | 140 | 0.2294 | 0.83 | 0.3112 | 1.1134 | 0.83 | 0.8139 | 0.2697 | 0.0703 |
| No log | 21.0 | 147 | 0.2380 | 0.835 | 0.3080 | 0.9961 | 0.835 | 0.8190 | 0.2841 | 0.0489 |
| No log | 22.0 | 154 | 0.2362 | 0.84 | 0.3034 | 0.9586 | 0.8400 | 0.8145 | 0.2626 | 0.0520 |
| No log | 23.0 | 161 | 0.2252 | 0.86 | 0.2946 | 1.1006 | 0.8600 | 0.8471 | 0.2830 | 0.0495 |
| No log | 24.0 | 168 | 0.2325 | 0.85 | 0.2985 | 0.9069 | 0.85 | 0.8288 | 0.2681 | 0.0533 |
| No log | 25.0 | 175 | 0.2335 | 0.825 | 0.3005 | 0.8930 | 0.825 | 0.8000 | 0.2640 | 0.0496 |
| No log | 26.0 | 182 | 0.2309 | 0.845 | 0.2984 | 1.0007 | 0.845 | 0.8308 | 0.2573 | 0.0536 |
| No log | 27.0 | 189 | 0.2265 | 0.835 | 0.3051 | 1.0092 | 0.835 | 0.8158 | 0.2626 | 0.0603 |
| No log | 28.0 | 196 | 0.2192 | 0.83 | 0.2977 | 1.0186 | 0.83 | 0.8019 | 0.2516 | 0.0572 |
| No log | 29.0 | 203 | 0.2276 | 0.83 | 0.3017 | 0.9407 | 0.83 | 0.8179 | 0.2553 | 0.0480 |
| No log | 30.0 | 210 | 0.2131 | 0.84 | 0.2992 | 0.9232 | 0.8400 | 0.8195 | 0.2541 | 0.0546 |
| No log | 31.0 | 217 | 0.2197 | 0.845 | 0.2998 | 0.9012 | 0.845 | 0.8301 | 0.2537 | 0.0569 |
| No log | 32.0 | 224 | 0.2138 | 0.85 | 0.2972 | 0.9117 | 0.85 | 0.8349 | 0.2777 | 0.0551 |
| No log | 33.0 | 231 | 0.2167 | 0.85 | 0.2969 | 1.0176 | 0.85 | 0.8390 | 0.2676 | 0.0535 |
| No log | 34.0 | 238 | 0.2114 | 0.84 | 0.2959 | 0.8912 | 0.8400 | 0.8190 | 0.2512 | 0.0514 |
| No log | 35.0 | 245 | 0.2145 | 0.845 | 0.2952 | 0.8960 | 0.845 | 0.8216 | 0.2638 | 0.0492 |
| No log | 36.0 | 252 | 0.2146 | 0.845 | 0.2960 | 0.9093 | 0.845 | 0.8301 | 0.2841 | 0.0519 |
| No log | 37.0 | 259 | 0.2157 | 0.845 | 0.2973 | 0.9043 | 0.845 | 0.8216 | 0.2614 | 0.0520 |
| No log | 38.0 | 266 | 0.2116 | 0.84 | 0.2949 | 0.8871 | 0.8400 | 0.8190 | 0.2639 | 0.0512 |
| No log | 39.0 | 273 | 0.2138 | 0.845 | 0.2963 | 0.9002 | 0.845 | 0.8301 | 0.2497 | 0.0512 |
| No log | 40.0 | 280 | 0.2129 | 0.84 | 0.2960 | 0.9731 | 0.8400 | 0.8190 | 0.2500 | 0.0511 |
| No log | 41.0 | 287 | 0.2139 | 0.845 | 0.2966 | 1.0111 | 0.845 | 0.8301 | 0.2750 | 0.0523 |
| No log | 42.0 | 294 | 0.2134 | 0.84 | 0.2959 | 0.9515 | 0.8400 | 0.8190 | 0.2577 | 0.0506 |
| No log | 43.0 | 301 | 0.2134 | 0.84 | 0.2972 | 0.9022 | 0.8400 | 0.8190 | 0.2538 | 0.0517 |
| No log | 44.0 | 308 | 0.2131 | 0.84 | 0.2966 | 0.9569 | 0.8400 | 0.8190 | 0.2683 | 0.0519 |
| No log | 45.0 | 315 | 0.2131 | 0.84 | 0.2965 | 0.8931 | 0.8400 | 0.8190 | 0.2504 | 0.0513 |
| No log | 46.0 | 322 | 0.2119 | 0.84 | 0.2963 | 0.8998 | 0.8400 | 0.8190 | 0.2535 | 0.0513 |
| No log | 47.0 | 329 | 0.2129 | 0.84 | 0.2973 | 0.9017 | 0.8400 | 0.8190 | 0.2527 | 0.0514 |
| No log | 48.0 | 336 | 0.2130 | 0.84 | 0.2971 | 0.8947 | 0.8400 | 0.8190 | 0.2520 | 0.0510 |
| No log | 49.0 | 343 | 0.2123 | 0.84 | 0.2972 | 0.9482 | 0.8400 | 0.8190 | 0.2583 | 0.0515 |
| No log | 50.0 | 350 | 0.2124 | 0.84 | 0.2970 | 0.9083 | 0.8400 | 0.8190 | 0.2604 | 0.0513 |
| No log | 51.0 | 357 | 0.2130 | 0.84 | 0.2974 | 0.8978 | 0.8400 | 0.8190 | 0.2446 | 0.0513 |
| No log | 52.0 | 364 | 0.2127 | 0.84 | 0.2975 | 0.8932 | 0.8400 | 0.8190 | 0.2457 | 0.0513 |
| No log | 53.0 | 371 | 0.2125 | 0.84 | 0.2972 | 0.8935 | 0.8400 | 0.8190 | 0.2508 | 0.0512 |
| No log | 54.0 | 378 | 0.2130 | 0.84 | 0.2975 | 0.8989 | 0.8400 | 0.8190 | 0.2551 | 0.0513 |
| No log | 55.0 | 385 | 0.2128 | 0.84 | 0.2972 | 0.8941 | 0.8400 | 0.8190 | 0.2448 | 0.0511 |
| No log | 56.0 | 392 | 0.2128 | 0.84 | 0.2974 | 0.8944 | 0.8400 | 0.8190 | 0.2459 | 0.0515 |
| No log | 57.0 | 399 | 0.2128 | 0.84 | 0.2973 | 0.8934 | 0.8400 | 0.8190 | 0.2517 | 0.0512 |
| No log | 58.0 | 406 | 0.2130 | 0.84 | 0.2973 | 0.8936 | 0.8400 | 0.8190 | 0.2448 | 0.0513 |
| No log | 59.0 | 413 | 0.2129 | 0.84 | 0.2973 | 0.8951 | 0.8400 | 0.8190 | 0.2383 | 0.0513 |
| No log | 60.0 | 420 | 0.2128 | 0.84 | 0.2972 | 0.8921 | 0.8400 | 0.8190 | 0.2519 | 0.0512 |
| No log | 61.0 | 427 | 0.2125 | 0.84 | 0.2974 | 0.8959 | 0.8400 | 0.8190 | 0.2518 | 0.0515 |
| No log | 62.0 | 434 | 0.2128 | 0.84 | 0.2973 | 0.8937 | 0.8400 | 0.8190 | 0.2385 | 0.0513 |
| No log | 63.0 | 441 | 0.2131 | 0.84 | 0.2974 | 0.8933 | 0.8400 | 0.8190 | 0.2551 | 0.0512 |
| No log | 64.0 | 448 | 0.2129 | 0.84 | 0.2974 | 0.8930 | 0.8400 | 0.8190 | 0.2388 | 0.0512 |
| No log | 65.0 | 455 | 0.2129 | 0.84 | 0.2973 | 0.8927 | 0.8400 | 0.8190 | 0.2447 | 0.0513 |
| No log | 66.0 | 462 | 0.2129 | 0.84 | 0.2974 | 0.8930 | 0.8400 | 0.8190 | 0.2385 | 0.0513 |
| No log | 67.0 | 469 | 0.2129 | 0.84 | 0.2974 | 0.8929 | 0.8400 | 0.8190 | 0.2458 | 0.0512 |
| No log | 68.0 | 476 | 0.2130 | 0.84 | 0.2975 | 0.8930 | 0.8400 | 0.8190 | 0.2455 | 0.0512 |
| No log | 69.0 | 483 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2459 | 0.0513 |
| No log | 70.0 | 490 | 0.2129 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2520 | 0.0513 |
| No log | 71.0 | 497 | 0.2131 | 0.84 | 0.2974 | 0.8919 | 0.8400 | 0.8190 | 0.2519 | 0.0513 |
| 0.1234 | 72.0 | 504 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2457 | 0.0511 |
| 0.1234 | 73.0 | 511 | 0.2129 | 0.84 | 0.2974 | 0.8917 | 0.8400 | 0.8190 | 0.2455 | 0.0512 |
| 0.1234 | 74.0 | 518 | 0.2129 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2455 | 0.0512 |
| 0.1234 | 75.0 | 525 | 0.2130 | 0.84 | 0.2973 | 0.8917 | 0.8400 | 0.8190 | 0.2519 | 0.0513 |
| 0.1234 | 76.0 | 532 | 0.2129 | 0.84 | 0.2974 | 0.8921 | 0.8400 | 0.8190 | 0.2455 | 0.0512 |
| 0.1234 | 77.0 | 539 | 0.2130 | 0.84 | 0.2973 | 0.8919 | 0.8400 | 0.8190 | 0.2455 | 0.0511 |
| 0.1234 | 78.0 | 546 | 0.2130 | 0.84 | 0.2973 | 0.8924 | 0.8400 | 0.8190 | 0.2455 | 0.0511 |
| 0.1234 | 79.0 | 553 | 0.2130 | 0.84 | 0.2974 | 0.8919 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 80.0 | 560 | 0.2130 | 0.84 | 0.2973 | 0.8915 | 0.8400 | 0.8190 | 0.2515 | 0.0512 |
| 0.1234 | 81.0 | 567 | 0.2130 | 0.84 | 0.2973 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0511 |
| 0.1234 | 82.0 | 574 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 83.0 | 581 | 0.2130 | 0.84 | 0.2973 | 0.8916 | 0.8400 | 0.8190 | 0.2516 | 0.0512 |
| 0.1234 | 84.0 | 588 | 0.2130 | 0.84 | 0.2974 | 0.8920 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 85.0 | 595 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 86.0 | 602 | 0.2130 | 0.84 | 0.2974 | 0.8917 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 87.0 | 609 | 0.2130 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2517 | 0.0512 |
| 0.1234 | 88.0 | 616 | 0.2130 | 0.84 | 0.2973 | 0.8916 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 89.0 | 623 | 0.2130 | 0.84 | 0.2974 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 90.0 | 630 | 0.2130 | 0.84 | 0.2973 | 0.8914 | 0.8400 | 0.8190 | 0.2517 | 0.0512 |
| 0.1234 | 91.0 | 637 | 0.2131 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 92.0 | 644 | 0.2130 | 0.84 | 0.2973 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 93.0 | 651 | 0.2130 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 94.0 | 658 | 0.2130 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 95.0 | 665 | 0.2130 | 0.84 | 0.2973 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 96.0 | 672 | 0.2131 | 0.84 | 0.2974 | 0.8915 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 97.0 | 679 | 0.2131 | 0.84 | 0.2973 | 0.8914 | 0.8400 | 0.8190 | 0.2517 | 0.0512 |
| 0.1234 | 98.0 | 686 | 0.2130 | 0.84 | 0.2974 | 0.8912 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 99.0 | 693 | 0.2131 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
| 0.1234 | 100.0 | 700 | 0.2131 | 0.84 | 0.2974 | 0.8913 | 0.8400 | 0.8190 | 0.2456 | 0.0512 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
yhyhy3/open_llama_7b_v2_med_instruct
|
yhyhy3
| 2023-07-10T16:22:39Z | 1,461 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"code",
"en",
"dataset:ehartford/dolphin",
"dataset:LinhDuong/chatdoctor-200k",
"dataset:sahil2801/code_instructions_120k",
"dataset:medalpaca/medical_meadow_mediqa",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-09T17:19:43Z |
---
license: apache-2.0
datasets:
- ehartford/dolphin
- LinhDuong/chatdoctor-200k
- sahil2801/code_instructions_120k
- medalpaca/medical_meadow_mediqa
- kaiokendev/SuperCOT-dataset
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned Open LLaMa model with 7B parameters, with specialities in medical QA and code instruction.
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model (QLoRA):** [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
## How to Get Started with the Model
Use the code below to get started with the model.
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = '''### Instruction: Answer the following question.
### Input: What is the capital of New Jersey?
### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
## Training Details
### Training Data
Converted the following datasets to alpaca:instruction format.
1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
- ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
- Only used the 1 million GPT4 generated instructions file [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl).
2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
- Refined dataset sourced from icliniq medical QA forum
3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
- Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
4. [medalpaca/medical_meadow_mediqa](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
- MEDIQA is a dataset of manually generated, question-driven summaries of multi- and single-document answers to consumer health questions, from the medalpaca group.
5. [kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- Code instruction dataset generously created by Kaio Ken
### Training Procedure
Trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) QLoRA on [RunPod](https://www.runpod.io/console/gpu-cloud) Community Cloud with 8x A6000 for 3 epochs (~14 hours, ~$70).
<details>
<summary>axolotl training config:</summary>
```yaml
base_model: openlm-research/open_llama_7b_v2
base_model_config: openlm-research/open_llama_7b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
hub_model_id:
hf_use_auth_token:
datasets:
- path: json
type: alpaca
data_files: /disk/flan1m-alpaca-uncensored.jsonl
shards: 8
- path: sahil2801/code_instructions_120k
type: alpaca
- path: LinhDuong/chatdoctor-200k
type: alpaca
shards: 2
- path: kaiokendev/SuperCOT-dataset
type: alpaca
- path: medalpaca/medical_meadow_mediqa
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
adapter: qlora
lora_model_dir:
sequence_len: 2048
max_packed_sequence_len: 2048
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_mode: true
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model: 'openllama_checkpoint'
output_dir: /disk/open_llama_7b_v2_dolphin_qlora
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 1000
eval_steps: 5000
save_steps:
debug:
deepspeed:
weight_decay: 0.0000001
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details>
|
himanshubohraxxx/innovaccer
|
himanshubohraxxx
| 2023-07-10T16:12:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T16:12:43Z |
---
license: creativeml-openrail-m
---
|
uw-madison/mra-base-4096-8-d3
|
uw-madison
| 2023-07-10T16:12:42Z | 495 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mra",
"fill-mask",
"arxiv:2207.10284",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-23T06:36:15Z |
# MRA
MRA model for masked language modeling (MLM) for sequence length 4096.
## About MRA
The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
*Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*
This model was contributed by [novice03](https://huggingface.co/novice03).
The original code can be found [here](https://github.com/mlpen/mra-attention).
|
uw-madison/mra-base-512-4
|
uw-madison
| 2023-07-10T16:11:54Z | 1,482 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mra",
"fill-mask",
"arxiv:2207.10284",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-23T06:04:05Z |
# MRA
MRA model for masked language modeling (MLM) for sequence length 512.
## About MRA
The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
*Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*
This model was contributed by [novice03](https://huggingface.co/novice03).
The original code can be found [here](https://github.com/mlpen/mra-attention).
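The card does not include a usage snippet; a minimal fill-mask sketch, assuming a `transformers` release that ships the MRA architecture and that the repository provides a tokenizer:
```python
from transformers import pipeline

# Masked language modeling with the MRA checkpoint.
unmasker = pipeline("fill-mask", model="uw-madison/mra-base-512-4")
masked = f"Paris is the {unmasker.tokenizer.mask_token} of France."
print(unmasker(masked))
```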
|
mgmeskill/Pixelcopter-PLE-v0
|
mgmeskill
| 2023-07-10T15:38:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T15:26:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.50 +/- 37.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid
|
tyavika
| 2023-07-10T15:31:42Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-09T20:06:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS16-Bert_CNN512LSTM256NoBid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS16-Bert_CNN512LSTM256NoBid
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6667
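Usage is not documented; a minimal sketch with the question-answering pipeline, assuming the checkpoint loads as a standard extractive QA head (the custom CNN/LSTM layers suggested by the model name may require extra code). The question and context are illustrative:
```python
from transformers import pipeline

# Extractive QA: the model selects an answer span from the supplied context.
qa = pipeline("question-answering", model="tyavika/LR1E4-BS16-Bert_CNN512LSTM256NoBid")
print(qa(question="Where was the conference held?", context="The 2019 conference was held in Lisbon, Portugal."))
```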
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7267 | 1.0 | 3290 | 1.5092 |
| 1.2394 | 2.0 | 6580 | 1.3933 |
| 0.8348 | 3.0 | 9870 | 1.5591 |
| 0.542 | 4.0 | 13160 | 1.6667 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MnLgt/textual_inversion_muir_1_5
|
MnLgt
| 2023-07-10T15:31:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T14:16:45Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - jordandavis/textual_inversion_muir_1_5
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
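As a usage sketch, textual inversion weights like these can typically be loaded into the base pipeline with `diffusers`; the `<muir>` token below is an arbitrary placeholder assigned at load time:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Register the learned embedding from this repo under a placeholder token.
pipe.load_textual_inversion("MnLgt/textual_inversion_muir_1_5", token="<muir>")
image = pipe("a painting of a coastline in the style of <muir>").images[0]
image.save("muir_style.png")
```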
|
grace-pro/afriberta-finetuned-hausa
|
grace-pro
| 2023-07-10T15:26:48Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T14:49:51Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: afriberta-finetuned-hausa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta-finetuned-hausa
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1242
- Precision: 0.7104
- Recall: 0.5095
- F1: 0.5934
- Accuracy: 0.9647
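No inference example is provided; a minimal sketch with the token-classification pipeline (the Hausa sentence is a placeholder):
```python
from transformers import pipeline

# Group sub-word predictions into whole entity spans.
ner = pipeline("token-classification", model="grace-pro/afriberta-finetuned-hausa", aggregation_strategy="simple")
print(ner("Muhammadu Buhari ya ziyarci Kano a ranar Litinin."))
```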
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1369 | 1.0 | 2624 | 0.1256 | 0.6856 | 0.4541 | 0.5463 | 0.9614 |
| 0.1103 | 2.0 | 5248 | 0.1195 | 0.7014 | 0.4947 | 0.5802 | 0.9637 |
| 0.0868 | 3.0 | 7872 | 0.1242 | 0.7104 | 0.5095 | 0.5934 | 0.9647 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Birchlabs/llama-13b-stepwise-embeddings
|
Birchlabs
| 2023-07-10T15:17:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:55:53Z |
---
license: apache-2.0
---
Fine-tuned input (`embed_tokens: Embedding`) and output (`lm_head: Linear`) embeddings layers, for use with [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter).
Prior to finetuning, we grew the vocabulary of the tokenizer and the embedding layers. The new embeddings were average-initialized and needed training, so we trained them; these are the weights from that training.
Ordinarily a QLoRA finetune of an LLM would not finetune the `embed_tokens: Embedding` (you'd need to get a bit creative, because not only have the dimensions changed, but also I don't believe any way has been established to train _adapters_ over `Embedding`s).
Nor apparently would it finetune `lm_head: Linear`. This is harder than it sounds (i.e. you can't handle it the same way you adapt the other Linear layers), because the dimensions have grown.
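For illustration only, a sketch of how the grown embedding layers might be swapped into the base model. The serialization format of `embed_tokens.pt` / `lm_head.pt` is an assumption here (state dicts with a `weight` tensor); see `evaluate.py` in the companion adapter repo for the authoritative loading code.
```python
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("huggyllama/llama-13b", torch_dtype=torch.float16)

# Assumption: each file is a state dict containing a "weight" tensor of the grown vocabulary size.
embed_sd = torch.load("embed_tokens.pt", map_location="cpu")
lm_head_sd = torch.load("lm_head.pt", map_location="cpu")

model.resize_token_embeddings(embed_sd["weight"].shape[0])
model.get_input_embeddings().load_state_dict(embed_sd)
model.get_output_embeddings().load_state_dict(lm_head_sd)
```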
|
S1X3L4/Taxi-v3
|
S1X3L4
| 2023-07-10T15:04:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T15:04:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="S1X3L4/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
dariowsz/wav2vec2-base-finetuned-gtzan
|
dariowsz
| 2023-07-10T15:03:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-04T13:47:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5537
- Accuracy: 0.88
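Usage is not documented in the card; a minimal sketch with the audio-classification pipeline (the file path is a placeholder):
```python
from transformers import pipeline

# Predict the music genre of a local audio clip.
classifier = pipeline("audio-classification", model="dariowsz/wav2vec2-base-finetuned-gtzan")
print(classifier("some_song.wav"))  # placeholder path to an audio file
```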
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7898 | 1.0 | 113 | 1.8052 | 0.45 |
| 1.4297 | 2.0 | 226 | 1.2229 | 0.62 |
| 1.041 | 3.0 | 339 | 0.9934 | 0.65 |
| 1.3882 | 4.0 | 452 | 1.1735 | 0.62 |
| 0.7248 | 5.0 | 565 | 0.8461 | 0.69 |
| 0.6128 | 6.0 | 678 | 0.7391 | 0.75 |
| 0.3225 | 7.0 | 791 | 0.8754 | 0.74 |
| 0.6483 | 8.0 | 904 | 0.8341 | 0.79 |
| 0.2755 | 9.0 | 1017 | 0.5537 | 0.88 |
| 0.4398 | 10.0 | 1130 | 0.6076 | 0.85 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-cocnat-mod-datasets-txt-processing
|
NasimB
| 2023-07-10T15:01:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T12:29:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-mod-datasets-txt-processing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-mod-datasets-txt-processing
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6848 | 0.3 | 500 | 5.6500 |
| 5.3379 | 0.59 | 1000 | 5.2204 |
| 4.9909 | 0.89 | 1500 | 4.9703 |
| 4.7146 | 1.19 | 2000 | 4.8200 |
| 4.5695 | 1.49 | 2500 | 4.7076 |
| 4.4685 | 1.78 | 3000 | 4.5985 |
| 4.3237 | 2.08 | 3500 | 4.5311 |
| 4.1614 | 2.38 | 4000 | 4.4731 |
| 4.1267 | 2.68 | 4500 | 4.4151 |
| 4.082 | 2.97 | 5000 | 4.3593 |
| 3.8448 | 3.27 | 5500 | 4.3575 |
| 3.8261 | 3.57 | 6000 | 4.3240 |
| 3.8089 | 3.86 | 6500 | 4.2887 |
| 3.6462 | 4.16 | 7000 | 4.2921 |
| 3.5453 | 4.46 | 7500 | 4.2840 |
| 3.529 | 4.76 | 8000 | 4.2688 |
| 3.4926 | 5.05 | 8500 | 4.2683 |
| 3.3463 | 5.35 | 9000 | 4.2715 |
| 3.3453 | 5.65 | 9500 | 4.2702 |
| 3.3408 | 5.95 | 10000 | 4.2694 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mleli/my-awesome-model
|
mleli
| 2023-07-10T15:01:15Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T14:45:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: my-awesome-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: offensive
split: validation
args: offensive
metrics:
- name: Accuracy
type: accuracy
value: 0.777190332326284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6838
- Accuracy: 0.7772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0915 | 1.0 | 1490 | 1.3955 | 0.7689 |
| 0.0638 | 2.0 | 2980 | 1.5816 | 0.7621 |
| 0.024 | 3.0 | 4470 | 1.6838 | 0.7772 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rickareo/distilbert-base-uncased-finetuned-emotion
|
rickareo
| 2023-07-10T14:59:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T14:44:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9229910973969778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2155
- Accuracy: 0.923
- F1: 0.9230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8271 | 1.0 | 250 | 0.3166 | 0.903 | 0.8989 |
| 0.2469 | 2.0 | 500 | 0.2155 | 0.923 | 0.9230 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
pmpc/de_pipeline
|
pmpc
| 2023-07-10T14:53:50Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"de",
"model-index",
"region:us"
] |
token-classification
| 2023-07-10T10:51:54Z |
---
tags:
- spacy
- token-classification
language:
- de
model-index:
- name: de_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9573497322
- name: NER Recall
type: recall
value: 0.9567803331
- name: NER F Score
type: f_score
value: 0.9570649479
---
| Feature | Description |
| --- | --- |
| **Name** | `de_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.3,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (19 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `AN`, `EUN`, `GRT`, `GS`, `INN`, `LD`, `LDS`, `LIT`, `MRK`, `ORG`, `PER`, `RR`, `RS`, `ST`, `STR`, `UN`, `VO`, `VS`, `VT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 95.71 |
| `ENTS_P` | 95.73 |
| `ENTS_R` | 95.68 |
| `TRANSFORMER_LOSS` | 11836.63 |
| `NER_LOSS` | 8009.96 |
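Usage is not shown; assuming the packaged pipeline has been installed from this repository, a minimal sketch looks like:
```python
import spacy

# Load the installed German NER pipeline and print the recognized entities.
nlp = spacy.load("de_pipeline")
doc = nlp("Das Bundesverfassungsgericht in Karlsruhe verhandelte am Montag.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```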
|
firecoral/ppo-LunarLander-v2
|
firecoral
| 2023-07-10T14:49:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T14:49:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.63 +/- 20.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file actually stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed, check the repo's file list.
checkpoint = load_from_hub(repo_id="firecoral/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Nyme/textual_inversion_cat
|
Nyme
| 2023-07-10T14:49:16Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T09:17:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Nyme/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
Birchlabs/llama-13b-stepwise-adapter
|
Birchlabs
| 2023-07-10T14:37:32Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-07-10T13:43:18Z |
---
license: apache-2.0
---
Finetunes Llama-13b+Alpaca to solve problems via stepwise reasoning (OpenAI [PRM800k dataset](https://github.com/openai/prm800k), or rather our postprocessed version, [`Birchlabs/openai-prm800k-solutions-only`](https://huggingface.co/datasets/Birchlabs/openai-prm800k-solutions-only)).
## Model description
This is a fork of [`llama-13b`](https://huggingface.co/huggyllama/llama-13b) + [`chansung/alpaca-lora-13b`](https://huggingface.co/chansung/alpaca-lora-13b).
That is: we loaded Llama-13b, applied the Alpaca LoRA, expanded the vocabulary, then QLoRA 4-bit finetuned from there.
Parts:
- base model [`llama-13b`](https://huggingface.co/huggyllama/llama-13b)
- LoRA 0 [`chansung/alpaca-lora-13b`](https://huggingface.co/chansung/alpaca-lora-13b)
- LoRA 1 [`Birchlabs/llama-13b-stepwise-adapter`](https://huggingface.co/Birchlabs/llama-13b-stepwise-adapter) (this)
- tokenizer [`Birchlabs/llama-13b-stepwise-tokenizer`](https://huggingface.co/Birchlabs/llama-13b-stepwise-tokenizer)
- finetuned input/output embedding layers: [`Birchlabs/llama-13b-stepwise-embeddings`](https://huggingface.co/Birchlabs/llama-13b-stepwise-embeddings)
## Training
Trained using [`qlora.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/qlora.py) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`522d86b`](https://github.com/scottlogic-alex/qlora/blob/522d86b447d9fe85e99ece33141fb37c4e947cda/qlora.py).
`python -m qlora --model_name_or_path huggyllama/llama-13b --lora_name_or_path chansung/alpaca-lora-13b --dataset prm800k-solutions --dataset_format prm800k-solutions --bf16 --max_memory_MB 24000 --use_bos_token_in_prompt --truncate_toward_center --source_max_len 184 --target_max_len 998 --gradient_accumulation_steps 4 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --learning_rate 0.0002 --run_name 13b_alpaca_special_tokens_long --report_to wandb --save_steps 64 --save_total_limit 3 --max_steps 1664 --evaluation_strategy steps --eval_steps 64 --generate_steps 16 --register_process_supervision_tokens`
- [(Private) W&B run](https://wandb.ai/scottlogic/llm-stepwise/runs/nvdyo6aw?workspace=user-birchlabs)
- [(Public) W&B report](https://api.wandb.ai/links/scottlogic/65wo5d2o)
## Usage
You can load using [`evaluate.py`](https://github.com/scottlogic-alex/qlora/blob/stepwise/evaluate.py#L209-L278) from our [`stepwise`](https://github.com/scottlogic-alex/qlora/tree/stepwise) branch of [qlora](https://github.com/artidoro/qlora).
Known-good as of commit [`522d86b`](https://github.com/scottlogic-alex/qlora/blob/522d86b447d9fe85e99ece33141fb37c4e947cda/evaluate.py).
Download `embed_tokens.pt` and `lm_head.pt` from [`Birchlabs/llama-13b-stepwise-embeddings`](https://huggingface.co/Birchlabs/llama-13b-stepwise-embeddings/tree/main), then run evaluator like so:
```bash
python -m evaluate \
--model_name_or_path huggyllama/llama-13b \
--base_lora_model_name_or_path chansung/alpaca-lora-13b \
--tokenizer_model_name_or_path Birchlabs/llama-13b-stepwise-tokenizer \
--lora_model_name_or_path Birchlabs/llama-13b-stepwise-adapter \
--input_embedding_path embed_tokens.pt \
--output_embedding_path lm_head.pt \
--bf16 \
--use_bos_token_in_prompt \
--overrun_countermeasures False
```
## Prompt style
The model is finetuned for instruction-response.
`prompt = '<s>Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nHow do I feed a family of 5 without ingredients?\n\n### Response:\n<|step_start|>'`
## Example output:
`$ How do I drink a matcha latte?`
`To drink a matcha latte, I should first remove the matcha from the bag. <|step_end|> <|step_start|> I can do that by opening the bag and shaking out the matcha into a glass or bowl. <|step_end|> <|step_start|> Then, I should add some hot water into the glass or bowl and start stirring the matcha. <|step_end|> <|step_start|> This is to make the matcha easier to mix with the water. <|step_end|> <|step_start|> Next, I should add milk into the glass or bowl and continue stirring. <|step_end|> <|step_start|> This is to make the matcha and the water form a thick mixture. <|step_end|> <|step_start|> Finally, I should add ice cubes into the glass or bowl and stir again. <|step_end|> <|step_start|> This is to make the mixture even thicker and chilled. <|step_end|> <|step_start|> So, a matcha latte is a drink that is a mixture of matcha, water, and milk, with some ice cubes. <|step_end|> <|step_start|> <|answer_start|> To make a matcha latte, I should remove the matcha from the bag, add some hot water into a glass or bowl and stir, add milk into the glass or bowl and stir as well, and finally add ice cubes and stir again. <|step_end|> <|step_start|> <|answer_start|> To drink a matcha latte, I should remove`
`$ What is the third Fibonacci number?`
`I know that the first two Fibonacci numbers are 1 and 1, since they are just the ones that appear in the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, and so on, where each term is the sum of the previous two. <|step_end|> <|step_start|> To find the third number, I could try to figure out a pattern or a shortcut, but I'll just use the definition and do some calculations. <|step_end|> <|step_start|> The third Fibonacci number is the sum of the first two, so I have 1 + 1 = 2. <|step_end|> <|step_start|> <|answer_start|> 2 <|answer_end|>`
`$ How do I make a cup of tea?`
`Ok, the first step is to open the cup of tea. <|step_end|> <|step_start|> Then the second step is to jump in the tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> Let's reverse the order of the steps so that they are more intuitive. <|step_end|> <|step_start|> The first step is to jump in the tea. <|step_end|> <|step_start|> Then the second step is to open the cup of tea. <|step_end|> <|step_start|> And finally the third step is to drink the jump. <|step_end|> <|step_start|> So, I think this is a good way to make a cup of tea. <|step_end|> <|step_start|> Yes, it is. <|answer_start|> How do I make a cup of tea? <|answer_end|>`
`$ How do I open a jar?`
`To open a jar, I need to use a key that fits into the lock on the jar. <|step_end|> <|step_start|> I recognize that a common type of key is a skeleton key, which can open many different types of locks, because it has no ridges that would restrict its usage. <|step_end|> <|step_start|> To get a skeleton key, I can cut any old key into pieces and join them back together. <|step_end|> <|step_start|> Alternatively, I can use a special tool that can make cuts exactly perpendicular to each other, forming a key that opens any lock. <|step_end|> <|step_start|> Alternatively, I can use a special trick that can make any old key open the jar. <|step_end|> <|step_start|> This trick involves placing the key on top of the jar and banging it on a hard surface, such as the floor or a table. <|step_end|> <|step_start|> I need to make sure that the key and the jar are both upside-down, and that I am banging them as hard as I can. <|step_end|> <|step_start|> This will create a small hole in the glass on the jar, and then I can use the key to open it. <|step_end|> <|step_start|> I need to be careful not to get cut by the glass, and I need to check if the jar is empty first, otherwise I might poke my finger and`
|
lizhuang144/flan-t5-base-factual-sg
|
lizhuang144
| 2023-07-10T14:34:47Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-06T11:13:16Z |
See details at 'https://github.com/zhuang-li/FACTUAL/tree/main'
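A minimal inference sketch with `transformers`. The "Generate Scene Graph:" prompt prefix is assumed from the linked FACTUAL repository and should be verified there; the caption is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lizhuang144/flan-t5-base-factual-sg")
model = AutoModelForSeq2SeqLM.from_pretrained("lizhuang144/flan-t5-base-factual-sg")

# Prompt prefix assumed from the FACTUAL repository; check the linked README
inputs = tokenizer(
    "Generate Scene Graph: two pigs are flying in the sky with bags on their backs",
    return_tensors="pt",
    truncation=True,
)
ids = model.generate(**inputs, max_length=200)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```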
|
marsh5/Reinforce-cartpole
|
marsh5
| 2023-07-10T14:31:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T14:31:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Tasaloris13/falcon-7b-test
|
Tasaloris13
| 2023-07-10T14:31:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T14:31:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
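A minimal loading sketch for inference. The base model is not stated in this card; `tiiuae/falcon-7b` is assumed from the repository name, and the 4-bit settings mirror the config above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "Tasaloris13/falcon-7b-test")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```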
|
medanis13/chatbot
|
medanis13
| 2023-07-10T14:25:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T14:22:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
JennnDexter/textual_inversion
|
JennnDexter
| 2023-07-10T14:24:31Z | 29 | 0 |
diffusers
|
[
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T11:57:47Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - JennnDexter/textual_inversion
These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
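A minimal usage sketch with `diffusers`, assuming the repository contains the `learned_embeds` file written by the textual inversion training script. The placeholder token learned during training is not stated here, so `<your-token>` is a stand-in that must be replaced:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("JennnDexter/textual_inversion")  # loads the learned embedding

# "<your-token>" stands in for the placeholder token used during training
image = pipe("a photo of <your-token>").images[0]
image.save("textual_inversion_sample.png")
```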
|
jordyvl/vit-_tobacco3482_kd_MSE_test_pretrain_student
|
jordyvl
| 2023-07-10T14:09:40Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T14:07:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-_tobacco3482_kd_MSE_test_pretrain_student
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-_tobacco3482_kd_MSE_test_pretrain_student
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.8077 | 0.4 | 0.7439 | 5.4442 | 0.4000 | 0.2755 | 0.2844 | 0.3738 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
sannne990/meinahentai
|
sannne990
| 2023-07-10T14:08:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T13:56:40Z |
---
license: creativeml-openrail-m
---
|
LmloCin/TEST_MODEL
|
LmloCin
| 2023-07-10T14:07:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T13:34:58Z |
import socket,warnings
try:
socket.setdefaulttimeout(1)
socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53))
except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on")
import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
    !pip install -Uqq fastai
# Skip this cell if you already have duckduckgo_search installed
!pip install -Uqq duckduckgo_search
from duckduckgo_search import ddg_images
from fastcore.all import *
def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')
urls = search_images('duck images', max_images=1)
urls[0]
from fastdownload import download_url
dest = 'duck.jpg'
download_url(urls[0], dest, show_progress=False)
from fastai.vision.all import *
im = Image.open(dest)
im.to_thumb(256,256)
download_url(search_images('lakes photos', max_images=1)[0], 'lakes.jpg', show_progress=False)
Image.open('lakes.jpg').to_thumb(256,256)
searches = 'lakes','duck'
path = Path('duck_or_not')
from time import sleep
for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    sleep(2)  # Pause between searches to avoid over-loading server
    download_images(dest, urls=search_images(f'{o} sun photo'))
    sleep(2)
    download_images(dest, urls=search_images(f'{o} shade photo'))
    sleep(2)
    resize_images(path/o, max_size=400, dest=path/o)
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)
dls = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
is_duck,_,probs = learn.predict(PILImage.create('duck.jpg'))
print(f"This is a: {is_duck}.")
print(f"Probability it's a duck: {probs[0]:.4f}")
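# Optional follow-up (a sketch, not part of the original notebook): export the trained
# learner so it can be reloaded later without retraining; the filename is illustrative.
learn.export('duck_classifier.pkl')
learn_inf = load_learner('duck_classifier.pkl')
pred,pred_idx,probs = learn_inf.predict(PILImage.create('duck.jpg'))
print(f"{pred}: {probs[pred_idx]:.4f}")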
|
kfkas/LawBot-v1_koalpaca_legalQA_easylaw_cro
|
kfkas
| 2023-07-10T14:06:22Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T14:06:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
iammartian0/speecht5_finetuned_voxpopuli_it
|
iammartian0
| 2023-07-10T14:03:39Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli/it",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-10T11:00:58Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli/it
model-index:
- name: speecht5_finetuned_voxpopuli_it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli/it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5467 | 10.58 | 1000 | 0.5003 |
| 0.5182 | 21.16 | 2000 | 0.4882 |
| 0.5046 | 31.75 | 3000 | 0.4857 |
| 0.5013 | 42.33 | 4000 | 0.4855 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
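A minimal text-to-speech sketch. The speaker embedding below is a zero placeholder; a real 512-dim x-vector (e.g. from the CMU Arctic x-vectors dataset) gives a much better voice:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("iammartian0/speecht5_finetuned_voxpopuli_it")
model = SpeechT5ForTextToSpeech.from_pretrained("iammartian0/speecht5_finetuned_voxpopuli_it")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Buongiorno, come stai?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech_it.wav", speech.numpy(), samplerate=16000)
```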
|
paumena/BioDistilBERT-SQUAD
|
paumena
| 2023-07-10T14:01:11Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-10T12:16:38Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: paumena/BioDistilBERT-SQUAD
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# paumena/BioDistilBERT-SQUAD
This model is a fine-tuned version of [nlpie/bio-distilbert-cased](https://huggingface.co/nlpie/bio-distilbert-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4856
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 27725, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.5466 | 0 |
| 0.9820 | 1 |
| 0.7453 | 2 |
| 0.5859 | 3 |
| 0.4856 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dilip-reddy/ppo-LunarLander
|
dilip-reddy
| 2023-07-10T13:57:53Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:57:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.69 +/- 17.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
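A minimal loading and evaluation sketch (the checkpoint filename is an assumption; check the repository's file list for the actual `.zip` name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; check the repo's file list
checkpoint = load_from_hub(repo_id="dilip-reddy/ppo-LunarLander", filename="ppo-LunarLander.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```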
|
JoaoReis/Neuronet
|
JoaoReis
| 2023-07-10T13:45:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T13:29:55Z |
import socket,warnings
try:
socket.setdefaulttimeout(1)
socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect(('1.1.1.1', 53))
except socket.error as ex: raise Exception("STOP: No internet. Click '>|' in top right and set 'Internet' switch to on")
import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')
if iskaggle:
    !pip install -Uqq fastai
!pip install -Uqq duckduckgo_search
from duckduckgo_search import ddg_images
from fastcore.all import *
def search_images(term, max_images=200): return L(ddg_images(term, max_results=max_images)).itemgot('image')
urls = search_images(' star fox photos', max_images=1)
urls[0]
from fastdownload import download_url
dest = 'starfox.jpg'
download_url(urls[0], dest, show_progress=False)
from fastai.vision.all import *
im = Image.open(dest)
im.to_thumb(256,256)
download_url(search_images('eva 01', max_images=1)[0], 'forest.jpg', show_progress=False)
Image.open('forest.jpg').to_thumb(256,256)
searches = 'eva 01','star fox'
path = Path('eva 01_or_not')
from time import sleep
for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    sleep(10)  # Pause between searches to avoid over-loading server
    download_images(dest, urls=search_images(f'{o} sun photo'))
    sleep(10)
    download_images(dest, urls=search_images(f'{o} shade photo'))
    sleep(10)
    resize_images(path/o, max_size=400, dest=path/o)
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
len(failed)
dls = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
item_tfms=[Resize(192, method='squish')]
).dataloaders(path)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
pred,pred_idx,probs = learn.predict(PILImage.create('starfox.jpg'))  # predict on the test image downloaded above ('bird.jpg' was never created)
print(f"This is a: {pred}.")
print(f"Probability: {probs[pred_idx]:.4f}")
|
WALIDALI/bekiamzrev
|
WALIDALI
| 2023-07-10T13:39:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-10T13:33:42Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekiamzrev Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
boostcamp-5th-nlp07/qlora-koalpaca-polyglot-5.8b-fast
|
boostcamp-5th-nlp07
| 2023-07-10T13:29:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T13:29:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Winmodel/q-FrozenLake-v1-4x4-noSlippery
|
Winmodel
| 2023-07-10T13:29:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:29:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Winmodel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
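`load_from_hub` above is the small helper from the course notebook rather than a published package; a minimal sketch of it:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```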
|
AladarMezga/detr-resnet-50_finetuned_cppe5
|
AladarMezga
| 2023-07-10T13:26:52Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-10T12:06:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
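A minimal inference sketch with the `object-detection` pipeline (the image path is illustrative):
```python
from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection", model="AladarMezga/detr-resnet-50_finetuned_cppe5")
image = Image.open("worksite_photo.jpg")  # illustrative path
for det in detector(image):
    print(det["label"], round(det["score"], 3), det["box"])
```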
|
1aurent/q-Taxi-v3
|
1aurent
| 2023-07-10T13:25:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:02:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="1aurent/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
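After loading (see the deep-rl-course notebook for the `load_from_hub` helper), the Q-table can be rolled out greedily. This sketch assumes the pickled dict stores the table under the `qtable` key, as in the course notebook:
```python
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"], render_mode="human")
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
env.close()
```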
|
Winmodel/dqn-SpaceInvadersNoFrameskip-v4
|
Winmodel
| 2023-07-10T13:18:04Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T13:17:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 554.00 +/- 269.84
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Winmodel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Winmodel -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Winmodel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TheBloke/Chronoboros-33B-GGML
|
TheBloke
| 2023-07-10T13:16:31Z | 0 | 11 | null |
[
"license:other",
"region:us"
] | null | 2023-07-10T08:29:30Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Henk717's Chronoboros 33B GGML
These files are GGML format model files for [Henk717's Chronoboros 33B](https://huggingface.co/Henk717/chronoboros-33B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/chronoboros-33B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| chronoboros-33b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| chronoboros-33b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| chronoboros-33b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| chronoboros-33b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| chronoboros-33b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| chronoboros-33b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| chronoboros-33b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| chronoboros-33b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| chronoboros-33b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| chronoboros-33b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| chronoboros-33b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| chronoboros-33b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| chronoboros-33b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| chronoboros-33b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m chronoboros-33b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
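## How to run from Python code
A minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). Note this assumes a llama-cpp-python version that still reads GGML v3 files (later releases only load GGUF), and `n_gpu_layers` only takes effect if the library was built with GPU support:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="chronoboros-33b.ggmlv3.q4_0.bin",
    n_ctx=2048,
    n_gpu_layers=32,
    n_threads=10,
)

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a story about llamas\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```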
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Henk717's Chronoboros 33B
This model was the result of a 50/50 average weight merge between Airoboros-33B-1.4 and Chronos-33B.
License is inherited from all merged models, which includes the LLaMA license requiring you to own a license to use the LLaMA models.
If you have such a license grant from Facebook you can request access to this model.
|
jordyvl/vit-tiny_tobacco3482_kd_MSE_test_pretrain_student
|
jordyvl
| 2023-07-10T13:01:47Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T12:59:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: vit-tiny_tobacco3482_kd_MSE_test_pretrain_student
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-tiny_tobacco3482_kd_MSE_test_pretrain_student
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 0.6243 | 0.595 | 0.6456 | 1.9017 | 0.595 | 0.5113 | 0.3512 | 0.2202 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
captain-awesome/naveed-ggml-model-gpt4all-falcon-q4_0
|
captain-awesome
| 2023-07-10T12:56:08Z | 0 | 4 | null |
[
"summarization",
"en",
"license:apache-2.0",
"region:us"
] |
summarization
| 2023-07-10T12:49:00Z |
---
license: apache-2.0
language:
- en
pipeline_tag: summarization
---
# Meeting Notes Generator
## Intended uses
Used to generate meeting notes based on a meeting transcript and starting prompts.
## Usage
```python
# Example of usage
from transformers import pipeline
summ = pipeline("summarization", "captain-awesome/naveed-ggml-model-gpt4all-falcon-q4_0")
text = "Replace this with the meeting transcript to summarise."
print(summ(text))
```
## Training data
Initialized with the pre-trained weights of the "gpt2" checkpoint and fine-tuned on stories of various genres.
|
mgmeskill/CartPole-v33
|
mgmeskill
| 2023-07-10T12:39:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T12:39:20Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v33
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 30.60 +/- 24.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
agercas/speecht5_finetuned_voxpopuli_lt
|
agercas
| 2023-07-10T12:38:13Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-10T09:40:14Z |
---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_lt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_lt
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4877 | 103.9 | 1000 | 0.4923 |
| 0.458 | 207.79 | 2000 | 0.5039 |
| 0.4439 | 311.69 | 3000 | 0.4976 |
| 0.4407 | 415.58 | 4000 | 0.5034 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-oversampling
|
hafidikhsan
| 2023-07-10T12:26:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-10T12:25:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-oversampling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-oversampling
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1381
- Accuracy: 0.7536
- F1: 0.7512
- Precision: 0.7510
- Recall: 0.7536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.8559 | 1.0 | 313 | 0.8926 | 0.5948 | 0.5697 | 0.5782 | 0.5948 |
| 0.6647 | 2.0 | 626 | 0.8132 | 0.6528 | 0.6435 | 0.6478 | 0.6528 |
| 0.5562 | 3.0 | 939 | 0.7991 | 0.72 | 0.7197 | 0.7209 | 0.72 |
| 0.2166 | 4.0 | 1252 | 0.9808 | 0.7528 | 0.7515 | 0.7514 | 0.7528 |
| 0.0269 | 5.0 | 1565 | 1.1381 | 0.7536 | 0.7512 | 0.7510 | 0.7536 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
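A minimal inference sketch with the `audio-classification` pipeline (the input path is illustrative; 16 kHz mono audio is assumed, matching the wav2vec 2.0 base model):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-aod-oversampling",
)
print(classifier("speech_sample.wav"))  # label/score pairs for the pronunciation classes
```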
|
Nianhua123/ppo-LunarLander-v2
|
Nianhua123
| 2023-07-10T12:24:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T12:24:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.69 +/- 15.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
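A minimal loading and rollout sketch (the checkpoint filename is an assumption; check the repository's file list for the actual `.zip` name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repo's file list
checkpoint = load_from_hub(repo_id="Nianhua123/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="human")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```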
|
PraveenJesu/openai-whisper-medium-peft-lora-v2.2.4
|
PraveenJesu
| 2023-07-10T12:24:27Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T12:24:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
161381373-qq/ee
|
161381373-qq
| 2023-07-10T12:04:47Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-07-10T12:04:14Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jordyvl/dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
|
jordyvl
| 2023-07-10T12:04:08Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T11:10:29Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8796
- Accuracy: 0.26
- Brier Loss: 0.8768
- Nll: 6.0962
- F1 Micro: 0.26
- F1 Macro: 0.2480
- Ece: 0.2002
- Aurc: 0.5815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 1.5365 | 0.065 | 0.9398 | 10.2864 | 0.065 | 0.0116 | 0.1183 | 0.9536 |
| No log | 2.0 | 14 | 1.5332 | 0.06 | 0.9374 | 9.8468 | 0.06 | 0.0269 | 0.1067 | 0.9096 |
| No log | 3.0 | 21 | 1.5119 | 0.085 | 0.9352 | 9.1495 | 0.085 | 0.0355 | 0.1135 | 0.8759 |
| No log | 4.0 | 28 | 1.5040 | 0.0825 | 0.9333 | 8.6549 | 0.0825 | 0.0439 | 0.1181 | 0.8618 |
| No log | 5.0 | 35 | 1.5021 | 0.1 | 0.9301 | 8.9643 | 0.1000 | 0.0558 | 0.1318 | 0.8030 |
| No log | 6.0 | 42 | 1.4885 | 0.1 | 0.9276 | 7.8684 | 0.1000 | 0.0505 | 0.1205 | 0.8190 |
| No log | 7.0 | 49 | 1.4882 | 0.0975 | 0.9254 | 9.4095 | 0.0975 | 0.0584 | 0.1220 | 0.7847 |
| No log | 8.0 | 56 | 1.4909 | 0.1275 | 0.9227 | 9.4274 | 0.1275 | 0.0827 | 0.1335 | 0.7445 |
| No log | 9.0 | 63 | 1.4837 | 0.115 | 0.9217 | 10.2918 | 0.115 | 0.0546 | 0.1366 | 0.7932 |
| No log | 10.0 | 70 | 1.4857 | 0.1125 | 0.9186 | 9.5039 | 0.1125 | 0.0510 | 0.1277 | 0.7749 |
| No log | 11.0 | 77 | 1.4804 | 0.1125 | 0.9183 | 8.5178 | 0.1125 | 0.0515 | 0.1315 | 0.7831 |
| No log | 12.0 | 84 | 1.4701 | 0.11 | 0.9177 | 8.2398 | 0.11 | 0.0655 | 0.1310 | 0.7754 |
| No log | 13.0 | 91 | 1.4721 | 0.16 | 0.9160 | 7.2379 | 0.16 | 0.1155 | 0.1462 | 0.7370 |
| No log | 14.0 | 98 | 1.4717 | 0.11 | 0.9159 | 8.1355 | 0.11 | 0.0633 | 0.1221 | 0.7579 |
| No log | 15.0 | 105 | 1.4739 | 0.1325 | 0.9138 | 7.4037 | 0.1325 | 0.0790 | 0.1419 | 0.7358 |
| No log | 16.0 | 112 | 1.4657 | 0.1425 | 0.9135 | 7.8063 | 0.1425 | 0.0821 | 0.1285 | 0.7269 |
| No log | 17.0 | 119 | 1.4632 | 0.1375 | 0.9112 | 7.8852 | 0.1375 | 0.0948 | 0.1389 | 0.7342 |
| No log | 18.0 | 126 | 1.4769 | 0.15 | 0.9081 | 8.5375 | 0.15 | 0.0894 | 0.1399 | 0.7113 |
| No log | 19.0 | 133 | 1.4547 | 0.1775 | 0.9045 | 6.4114 | 0.1775 | 0.1174 | 0.1507 | 0.7007 |
| No log | 20.0 | 140 | 1.4470 | 0.1725 | 0.9031 | 8.1696 | 0.1725 | 0.1246 | 0.1464 | 0.7079 |
| No log | 21.0 | 147 | 1.4615 | 0.19 | 0.9021 | 6.0696 | 0.19 | 0.1390 | 0.1646 | 0.7023 |
| No log | 22.0 | 154 | 1.4588 | 0.2 | 0.8996 | 6.0038 | 0.2000 | 0.1384 | 0.1628 | 0.6821 |
| No log | 23.0 | 161 | 1.4646 | 0.1525 | 0.8988 | 7.0678 | 0.1525 | 0.1075 | 0.1458 | 0.7000 |
| No log | 24.0 | 168 | 1.4491 | 0.2125 | 0.8933 | 5.9276 | 0.2125 | 0.1503 | 0.1533 | 0.6457 |
| No log | 25.0 | 175 | 1.4526 | 0.205 | 0.8916 | 7.6108 | 0.205 | 0.1479 | 0.1603 | 0.6676 |
| No log | 26.0 | 182 | 1.4510 | 0.17 | 0.8910 | 5.6337 | 0.17 | 0.1333 | 0.1396 | 0.6868 |
| No log | 27.0 | 189 | 1.4567 | 0.19 | 0.8850 | 5.2038 | 0.19 | 0.1380 | 0.1637 | 0.6547 |
| No log | 28.0 | 196 | 1.4570 | 0.2225 | 0.8846 | 6.5368 | 0.2225 | 0.1840 | 0.1701 | 0.6554 |
| No log | 29.0 | 203 | 1.4701 | 0.2075 | 0.8820 | 5.0057 | 0.2075 | 0.1663 | 0.1719 | 0.6598 |
| No log | 30.0 | 210 | 1.4693 | 0.2225 | 0.8755 | 7.4456 | 0.2225 | 0.1729 | 0.1626 | 0.6355 |
| No log | 31.0 | 217 | 1.4670 | 0.23 | 0.8787 | 5.8938 | 0.23 | 0.1904 | 0.1717 | 0.6424 |
| No log | 32.0 | 224 | 1.4540 | 0.2275 | 0.8756 | 6.6513 | 0.2275 | 0.1673 | 0.1676 | 0.6306 |
| No log | 33.0 | 231 | 1.4641 | 0.2275 | 0.8649 | 5.5689 | 0.2275 | 0.1751 | 0.1746 | 0.6138 |
| No log | 34.0 | 238 | 1.4710 | 0.2425 | 0.8640 | 7.0556 | 0.2425 | 0.1957 | 0.1809 | 0.6048 |
| No log | 35.0 | 245 | 1.4685 | 0.23 | 0.8632 | 5.5735 | 0.23 | 0.1940 | 0.1609 | 0.6188 |
| No log | 36.0 | 252 | 1.4665 | 0.2375 | 0.8592 | 5.8835 | 0.2375 | 0.1952 | 0.1727 | 0.6050 |
| No log | 37.0 | 259 | 1.4668 | 0.235 | 0.8540 | 5.3502 | 0.235 | 0.1966 | 0.1746 | 0.6056 |
| No log | 38.0 | 266 | 1.4855 | 0.27 | 0.8510 | 5.3781 | 0.27 | 0.2124 | 0.1692 | 0.5825 |
| No log | 39.0 | 273 | 1.5279 | 0.265 | 0.8562 | 6.2426 | 0.265 | 0.2126 | 0.1772 | 0.5831 |
| No log | 40.0 | 280 | 1.5433 | 0.2425 | 0.8551 | 5.9574 | 0.2425 | 0.1867 | 0.1499 | 0.5874 |
| No log | 41.0 | 287 | 1.5955 | 0.2525 | 0.8597 | 6.1628 | 0.2525 | 0.2024 | 0.1479 | 0.5891 |
| No log | 42.0 | 294 | 1.5528 | 0.2475 | 0.8541 | 6.3624 | 0.2475 | 0.1908 | 0.1566 | 0.5735 |
| No log | 43.0 | 301 | 1.5858 | 0.2675 | 0.8504 | 6.1261 | 0.2675 | 0.2174 | 0.1706 | 0.5674 |
| No log | 44.0 | 308 | 1.6013 | 0.2725 | 0.8496 | 5.8409 | 0.2725 | 0.2463 | 0.1846 | 0.5807 |
| No log | 45.0 | 315 | 1.5632 | 0.2625 | 0.8472 | 5.9669 | 0.2625 | 0.2307 | 0.1689 | 0.5689 |
| No log | 46.0 | 322 | 1.6520 | 0.2675 | 0.8509 | 5.8544 | 0.2675 | 0.2325 | 0.1779 | 0.5622 |
| No log | 47.0 | 329 | 1.6135 | 0.2625 | 0.8476 | 5.5208 | 0.2625 | 0.2504 | 0.1565 | 0.5759 |
| No log | 48.0 | 336 | 1.6565 | 0.275 | 0.8466 | 5.9254 | 0.275 | 0.2527 | 0.2026 | 0.5616 |
| No log | 49.0 | 343 | 1.6807 | 0.2625 | 0.8531 | 6.1297 | 0.2625 | 0.2259 | 0.1813 | 0.5664 |
| No log | 50.0 | 350 | 1.7266 | 0.255 | 0.8560 | 6.0828 | 0.255 | 0.2315 | 0.1817 | 0.5735 |
| No log | 51.0 | 357 | 1.7038 | 0.2525 | 0.8579 | 5.6442 | 0.2525 | 0.2405 | 0.1861 | 0.5828 |
| No log | 52.0 | 364 | 1.7954 | 0.255 | 0.8583 | 5.7016 | 0.255 | 0.2227 | 0.1722 | 0.5725 |
| No log | 53.0 | 371 | 1.7567 | 0.275 | 0.8557 | 6.1586 | 0.275 | 0.2523 | 0.1577 | 0.5619 |
| No log | 54.0 | 378 | 1.7589 | 0.2525 | 0.8565 | 5.3969 | 0.2525 | 0.2325 | 0.1840 | 0.5661 |
| No log | 55.0 | 385 | 1.7778 | 0.265 | 0.8569 | 5.8559 | 0.265 | 0.2447 | 0.1835 | 0.5640 |
| No log | 56.0 | 392 | 1.8044 | 0.275 | 0.8592 | 5.9942 | 0.275 | 0.2517 | 0.1783 | 0.5627 |
| No log | 57.0 | 399 | 1.8327 | 0.2625 | 0.8628 | 6.0224 | 0.2625 | 0.2333 | 0.1801 | 0.5560 |
| No log | 58.0 | 406 | 1.8184 | 0.25 | 0.8609 | 6.0769 | 0.25 | 0.2333 | 0.1941 | 0.5718 |
| No log | 59.0 | 413 | 1.8318 | 0.2575 | 0.8639 | 5.9454 | 0.2575 | 0.2364 | 0.1965 | 0.5743 |
| No log | 60.0 | 420 | 1.8081 | 0.2525 | 0.8641 | 6.0119 | 0.2525 | 0.2380 | 0.1818 | 0.5755 |
| No log | 61.0 | 427 | 1.8405 | 0.2625 | 0.8775 | 6.2129 | 0.2625 | 0.2474 | 0.1767 | 0.5908 |
| No log | 62.0 | 434 | 1.9012 | 0.2625 | 0.8728 | 6.1015 | 0.2625 | 0.2373 | 0.1881 | 0.5716 |
| No log | 63.0 | 441 | 1.8500 | 0.26 | 0.8728 | 6.3885 | 0.26 | 0.2414 | 0.1933 | 0.5809 |
| No log | 64.0 | 448 | 1.8771 | 0.2675 | 0.8733 | 6.2730 | 0.2675 | 0.2553 | 0.2035 | 0.5800 |
| No log | 65.0 | 455 | 1.8744 | 0.2575 | 0.8677 | 5.9805 | 0.2575 | 0.2392 | 0.1918 | 0.5663 |
| No log | 66.0 | 462 | 1.8366 | 0.255 | 0.8694 | 6.0073 | 0.255 | 0.2403 | 0.2048 | 0.5807 |
| No log | 67.0 | 469 | 1.8758 | 0.2575 | 0.8743 | 6.1015 | 0.2575 | 0.2381 | 0.2071 | 0.5825 |
| No log | 68.0 | 476 | 1.8796 | 0.2675 | 0.8711 | 5.9457 | 0.2675 | 0.2470 | 0.2100 | 0.5737 |
| No log | 69.0 | 483 | 1.8635 | 0.2675 | 0.8721 | 5.9312 | 0.2675 | 0.2493 | 0.1788 | 0.5751 |
| No log | 70.0 | 490 | 1.8801 | 0.2625 | 0.8710 | 5.9629 | 0.2625 | 0.2467 | 0.1974 | 0.5721 |
| No log | 71.0 | 497 | 1.8936 | 0.26 | 0.8791 | 6.0358 | 0.26 | 0.2481 | 0.1922 | 0.5844 |
| 0.9216 | 72.0 | 504 | 1.8736 | 0.275 | 0.8715 | 6.0493 | 0.275 | 0.2569 | 0.2099 | 0.5710 |
| 0.9216 | 73.0 | 511 | 1.8784 | 0.2525 | 0.8760 | 6.1441 | 0.2525 | 0.2401 | 0.1978 | 0.5849 |
| 0.9216 | 74.0 | 518 | 1.8843 | 0.2725 | 0.8763 | 6.1948 | 0.2725 | 0.2533 | 0.2007 | 0.5801 |
| 0.9216 | 75.0 | 525 | 1.8785 | 0.2675 | 0.8784 | 5.9868 | 0.2675 | 0.2578 | 0.1975 | 0.5851 |
| 0.9216 | 76.0 | 532 | 1.8812 | 0.275 | 0.8725 | 5.9367 | 0.275 | 0.2594 | 0.2037 | 0.5744 |
| 0.9216 | 77.0 | 539 | 1.8956 | 0.27 | 0.8746 | 5.9038 | 0.27 | 0.2541 | 0.1816 | 0.5738 |
| 0.9216 | 78.0 | 546 | 1.8897 | 0.265 | 0.8802 | 5.9763 | 0.265 | 0.2493 | 0.2098 | 0.5866 |
| 0.9216 | 79.0 | 553 | 1.8728 | 0.275 | 0.8752 | 6.0806 | 0.275 | 0.2623 | 0.1874 | 0.5794 |
| 0.9216 | 80.0 | 560 | 1.8887 | 0.2725 | 0.8759 | 6.2762 | 0.2725 | 0.2520 | 0.2005 | 0.5768 |
| 0.9216 | 81.0 | 567 | 1.8987 | 0.2725 | 0.8787 | 6.2444 | 0.2725 | 0.2587 | 0.2183 | 0.5773 |
| 0.9216 | 82.0 | 574 | 1.8759 | 0.2625 | 0.8773 | 6.1643 | 0.2625 | 0.2541 | 0.1922 | 0.5805 |
| 0.9216 | 83.0 | 581 | 1.8766 | 0.27 | 0.8748 | 6.0036 | 0.27 | 0.2554 | 0.1784 | 0.5762 |
| 0.9216 | 84.0 | 588 | 1.8809 | 0.2625 | 0.8764 | 6.0488 | 0.2625 | 0.2469 | 0.2030 | 0.5833 |
| 0.9216 | 85.0 | 595 | 1.8982 | 0.26 | 0.8775 | 6.0747 | 0.26 | 0.2453 | 0.1998 | 0.5851 |
| 0.9216 | 86.0 | 602 | 1.8912 | 0.27 | 0.8798 | 6.1894 | 0.27 | 0.2566 | 0.1938 | 0.5839 |
| 0.9216 | 87.0 | 609 | 1.8847 | 0.2775 | 0.8769 | 6.2744 | 0.2775 | 0.2643 | 0.2019 | 0.5775 |
| 0.9216 | 88.0 | 616 | 1.8734 | 0.265 | 0.8741 | 6.1928 | 0.265 | 0.2526 | 0.1763 | 0.5820 |
| 0.9216 | 89.0 | 623 | 1.8760 | 0.2725 | 0.8768 | 6.0274 | 0.2725 | 0.2620 | 0.2039 | 0.5792 |
| 0.9216 | 90.0 | 630 | 1.8860 | 0.265 | 0.8771 | 6.0912 | 0.265 | 0.2518 | 0.1924 | 0.5810 |
| 0.9216 | 91.0 | 637 | 1.8865 | 0.2625 | 0.8750 | 6.2350 | 0.2625 | 0.2476 | 0.1844 | 0.5791 |
| 0.9216 | 92.0 | 644 | 1.8815 | 0.2725 | 0.8733 | 6.0962 | 0.2725 | 0.2563 | 0.2013 | 0.5721 |
| 0.9216 | 93.0 | 651 | 1.8794 | 0.27 | 0.8756 | 6.2535 | 0.27 | 0.2562 | 0.2028 | 0.5764 |
| 0.9216 | 94.0 | 658 | 1.8835 | 0.2675 | 0.8769 | 6.2039 | 0.2675 | 0.2562 | 0.1928 | 0.5773 |
| 0.9216 | 95.0 | 665 | 1.8904 | 0.27 | 0.8786 | 6.1504 | 0.27 | 0.2543 | 0.2034 | 0.5768 |
| 0.9216 | 96.0 | 672 | 1.8911 | 0.26 | 0.8788 | 6.1527 | 0.26 | 0.2465 | 0.2025 | 0.5829 |
| 0.9216 | 97.0 | 679 | 1.8871 | 0.265 | 0.8776 | 6.0994 | 0.265 | 0.2519 | 0.2126 | 0.5794 |
| 0.9216 | 98.0 | 686 | 1.8825 | 0.265 | 0.8769 | 6.1564 | 0.265 | 0.2516 | 0.1987 | 0.5776 |
| 0.9216 | 99.0 | 693 | 1.8803 | 0.2675 | 0.8766 | 6.1183 | 0.2675 | 0.2561 | 0.2095 | 0.5798 |
| 0.9216 | 100.0 | 700 | 1.8796 | 0.26 | 0.8768 | 6.0962 | 0.26 | 0.2480 | 0.2002 | 0.5815 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
emailconverter/SysTools-Has-Recently-Launched-Business-Email
|
emailconverter
| 2023-07-10T11:45:27Z | 0 | 0 | null |
[
"bussiness email",
"en",
"region:us"
] | null | 2023-07-10T11:36:58Z |
---
language:
- en
tags:
- bussiness email
---
SysTools recently launched a business email solution.
The company commits to 99.99% uptime and offers free migration from existing email services to SysTools business email.
It is affordable for organizations of all sizes, protects against data loss, and is accessible from a computer or a mobile phone.
The software also includes a smart calendar for regular, easily accessible task management.
You can also configure one or more hosting servers with SysTools Mail.
For more information: https://www.prlog.org/12972368-systools-has-come-with-business-email-solution-for-msmes.html
|
Junr-syl/tweet_sentiments_analysis
|
Junr-syl
| 2023-07-10T11:21:39Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T09:19:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tweet_sentiments_analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tweet_sentiments_analysis
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3953
- eval_accuracy: 0.8660
- eval_runtime: 254.1512
- eval_samples_per_second: 31.473
- eval_steps_per_second: 3.935
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AquaV/Hoers13B-ggml-q4_0
|
AquaV
| 2023-07-10T11:17:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T10:06:01Z |
I believe this was trained on the dataset available at https://huggingface.co/datasets/Amo/FimFic_Omega_V3. The dataset comprises user-generated stories inspired by the "My Little Pony: Friendship is Magic" series.
I'm not the original creator of the model; however, based on the training data, the prompt format might be:
```
<|startoftext|>
[tags: author: (author name), character: (character name), character: (character name), genre: (genre), series: (series name) warning: (content warnings)]
***
CHAPTER: (chapter name)
{Cursor here}
```
I am uncertain if "<|startoftext|>" should be included.
Here are two examples from the training data:
```
<|startoftext|>
[tags: author: device heretic, character: Other, character: Princess Celestia, character: Twilight Sparkle, genre: Sad, genre: Slice of Life, genre: Tragedy, series: My Little Pony: Friendship is Magic ]
***
CHAPTER: The Underlying Truth
{Cursor here}
```
```
<|startoftext|>
[tags: author: Bloodline Spike, character: Cutie Mark Crusaders, character: Main 6, character: Princess Celestia, character: Princess Luna, character: Spike, genre: Adventure, genre: Dark, genre: Romance, genre: Sad, genre: Tragedy, series: My Little Pony: Friendship is Magic, warning: Gore ]
***
CHAPTER: Chapter 1 Entering the Medallion
{Cursor here}
```
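If it helps, here is a small illustrative Python helper that assembles a prompt in the format above; the field layout mirrors the examples, but the inclusion of `<|startoftext|>` and the exact spacing are assumptions rather than confirmed details of the original training setup.
```python
# Illustrative helper that assembles a prompt in the format shown above.
# Whether "<|startoftext|>" should be included is uncertain (see note above).
def build_prompt(author, characters, genres, series, chapter, warnings=None):
    tags = [f"author: {author}"]
    tags += [f"character: {c}" for c in characters]
    tags += [f"genre: {g}" for g in genres]
    tags.append(f"series: {series}")
    if warnings:
        tags.append(f"warning: {', '.join(warnings)}")
    return (
        "<|startoftext|>\n"
        f"[tags: {', '.join(tags)} ]\n"
        "***\n"
        f"CHAPTER: {chapter}\n"
    )

print(build_prompt(
    author="device heretic",
    characters=["Princess Celestia", "Twilight Sparkle"],
    genres=["Sad", "Slice of Life"],
    series="My Little Pony: Friendship is Magic",
    chapter="The Underlying Truth",
))
```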
As I am just an archiver of this model, I may not be able to provide further support or solve issues you encounter while using it.
|
JFoz/test_nvs
|
JFoz
| 2023-07-10T11:06:54Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-07-06T15:23:06Z |
---
license: cc-by-nc-4.0
---
## GenVS (partial) reimplementation
Model weights for a partial, somewhat unfaithful reimplementation of GeNVS https://nvlabs.github.io/genvs/media/genvs.pdf
Code repo at https://github.com/jfozard/nvs_test
### Dataset source
Model trained on ShapeNet car renderings from https://github.com/vsitzmann/scene-representation-networks
These are not for commercial use (ShapeNet license conditions).
### Example results
Conditioning image

Reconstructed views

|
UWB-AIR/Czert-A-base-uncased
|
UWB-AIR
| 2023-07-10T10:59:04Z | 94 | 3 |
transformers
|
[
"transformers",
"tf",
"albert",
"cs",
"arxiv:2103.13031",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- cs
---
# CZERT
This repository hosts the Czert-A model for the paper [Czert – Czech BERT-like Model for Language Representation](https://arxiv.org/abs/2103.13031).
For more information, see the paper.
## Available Models
You can download **MLM & NSP only** pretrained models
~~[CZERT-A-v1](https://air.kiv.zcu.cz/public/CZERT-A-czert-albert-base-uncased.zip)
[CZERT-B-v1](https://air.kiv.zcu.cz/public/CZERT-B-czert-bert-base-cased.zip)~~
After some additional experiments, we found that the tokenizer configs had been exported incorrectly: in Czert-B-v1, the tokenizer parameter "do_lower_case" was wrongly set to true, and in Czert-A-v1 the parameter "strip_accents" was wrongly set to true.
Both mistakes are fixed in v2.
[CZERT-A-v2](https://air.kiv.zcu.cz/public/CZERT-A-v2-czert-albert-base-uncased.zip)
[CZERT-B-v2](https://air.kiv.zcu.cz/public/CZERT-B-v2-czert-bert-base-cased.zip)
or choose from one of **Finetuned Models**
| | Models |
| - | - |
| Sentiment Classification<br> (Facebook or CSFD) | [CZERT-A-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-A_fb.zip) <br> [CZERT-B-sentiment-FB](https://air.kiv.zcu.cz/public/CZERT-B_fb.zip) <br> [CZERT-A-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-A_csfd.zip) <br> [CZERT-B-sentiment-CSFD](https://air.kiv.zcu.cz/public/CZERT-B_csfd.zip) |
| Semantic Text Similarity <br> (Czech News Agency) | [CZERT-A-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-A-sts-CNA.zip) <br> [CZERT-B-sts-CNA](https://air.kiv.zcu.cz/public/CZERT-B-sts-CNA.zip) |
| Named Entity Recognition | [CZERT-A-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-A-ner-CNEC-cased.zip) <br> [CZERT-B-ner-CNEC](https://air.kiv.zcu.cz/public/CZERT-B-ner-CNEC-cased.zip) <br>[PAV-ner-CNEC](https://air.kiv.zcu.cz/public/PAV-ner-CNEC-cased.zip) <br> [CZERT-A-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-A-ner-BSNLP-cased.zip)<br>[CZERT-B-ner-BSNLP](https://air.kiv.zcu.cz/public/CZERT-B-ner-BSNLP-cased.zip) <br>[PAV-ner-BSNLP](https://air.kiv.zcu.cz/public/PAV-ner-BSNLP-cased.zip) |
| Morphological Tagging<br> | [CZERT-A-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-A-morphtag-126k-cased.zip)<br>[CZERT-B-morphtag-126k](https://air.kiv.zcu.cz/public/CZERT-B-morphtag-126k-cased.zip) |
| Semantic Role Labelling |[CZERT-A-srl](https://air.kiv.zcu.cz/public/CZERT-A-srl-cased.zip)<br> [CZERT-B-srl](https://air.kiv.zcu.cz/public/CZERT-B-srl-cased.zip) |
## How to Use CZERT?
### Sentence Level Tasks
We evaluate our model on two sentence level tasks:
* Sentiment Classification,
* Semantic Text Similarity.
<!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)
or
self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
-->
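A runnable version of the commented-out template above might look like the following sketch; the local checkpoint path and the `num_labels` value are assumptions (adjust them to the v2 checkpoint you downloaded and to the task at hand).
```python
from transformers import BertTokenizerFast, TFAlbertForSequenceClassification

# Path to the unpacked CZERT-A-v2 checkpoint (assumption: adjust to your local directory).
CZERT_MODEL_PATH = "./CZERT-A-v2-czert-albert-base-uncased"

# strip_accents=False matches the tokenizer fix described for the v2 release above.
tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)

# num_labels=1 is just an example (a single regression output, e.g. a similarity score).
model = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)

inputs = tokenizer("Dobrý den, jak se máte?", return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits)
```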
### Document Level Tasks
We evaluate our model on one document level task
* Multi-label Document Classification.
### Token Level Tasks
We evaluate our model on three token level tasks:
* Named Entity Recognition,
* Morphological Tagging,
* Semantic Role Labelling.
## Downstream Tasks Fine-tuning Results
### Sentiment Classification
| | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B |
|:----:|:------------------------:|:------------------------:|:------------------------:|:-----------------------:|:--------------------------------:|
| FB | 71.72 ± 0.91 | 73.87 ± 0.50 | 59.50 ± 0.47 | 72.47 ± 0.72 | **76.55** ± **0.14** |
| CSFD | 82.80 ± 0.14 | 82.51 ± 0.14 | 75.40 ± 0.18 | 79.58 ± 0.46 | **84.79** ± **0.26** |
Average F1 results for the Sentiment Classification task. For more information, see [the paper](https://arxiv.org/abs/2103.13031).
### Semantic Text Similarity
| | **mBERT** | **Pavlov** | **Albert-random** | **Czert-A** | **Czert-B** |
|:-------------|:--------------:|:--------------:|:-----------------:|:--------------:|:----------------------:|
| STA-CNA | 83.335 ± 0.063 | 83.593 ± 0.050 | 43.184 ± 0.125 | 82.942 ± 0.106 | **84.345** ± **0.028** |
| STS-SVOB-img | 79.367 ± 0.486 | 79.900 ± 0.810 | 15.739 ± 2.992 | 79.444 ± 0.338 | **83.744** ± **0.395** |
| STS-SVOB-hl | 78.833 ± 0.296 | 76.996 ± 0.305 | 33.949 ± 1.807 | 75.089 ± 0.806 | **79.827 ± 0.469** |
Comparison of Pearson correlation achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on semantic text similarity. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Multi-label Document Classification
| | mBERT | SlavicBERT | ALBERT-r | Czert-A | Czert-B |
|:-----:|:------------:|:------------:|:------------:|:------------:|:-------------------:|
| AUROC | 97.62 ± 0.08 | 97.80 ± 0.06 | 94.35 ± 0.13 | 97.49 ± 0.07 | **98.00** ± **0.04** |
| F1 | 83.04 ± 0.16 | 84.08 ± 0.14 | 72.44 ± 0.22 | 82.27 ± 0.17 | **85.06** ± **0.11** |
Comparison of F1 and AUROC score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on multi-label document classification. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Morphological Tagging
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B |
|:-----------------------|:---------------|:---------------|:---------------|:---------------|:---------------|
| Universal Dependencies | 99.176 ± 0.006 | 99.211 ± 0.008 | 96.590 ± 0.096 | 98.713 ± 0.008 | **99.300 ± 0.009** |
Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on morphological tagging task. For more information see [the paper](https://arxiv.org/abs/2103.13031).
### Semantic Role Labelling
<div id="tab:SRL">
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep |
|:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
| span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** | - | - |
| syntax | 90.226 ± 0.224 | 90.492 ± 0.040 | 80.747 ± 0.131 | 80.319 ± 0.054 | **91.462 ± 0.062** | 85.19 | 89.52 |
SRL results – dep columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; the other columns are evaluated with the span F1 score, the same as used for the NER evaluation. For more information see [the paper](https://arxiv.org/abs/2103.13031).
</div>
### Named Entity Recognition
| | mBERT | Pavlov | Albert-random | Czert-A | Czert-B |
|:-----------|:---------------|:---------------|:---------------|:---------------|:---------------|
| CNEC | **86.225 ± 0.208** | **86.565 ± 0.198** | 34.635 ± 0.343 | 72.945 ± 0.227 | 86.274 ± 0.116 |
| BSNLP 2019 | 84.006 ± 1.248 | **86.699 ± 0.370** | 19.773 ± 0.938 | 48.859 ± 0.605 | **86.729 ± 0.344** |
Comparison of f1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlov and randomly initialised Albert on named entity recognition task. For more information see [the paper](https://arxiv.org/abs/2103.13031).
## Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/
## How should I cite CZERT?
For now, please cite [the Arxiv paper](https://arxiv.org/abs/2103.13031):
```
@article{sido2021czert,
title={Czert -- Czech BERT-like Model for Language Representation},
author={Jakub Sido and Ondřej Pražák and Pavel Přibáň and Jan Pašek and Michal Seják and Miloslav Konopík},
year={2021},
eprint={2103.13031},
archivePrefix={arXiv},
primaryClass={cs.CL},
journal={arXiv preprint arXiv:2103.13031},
}
```
|
km0228kr/xlm-roberta-base-finetuned-panx-de
|
km0228kr
| 2023-07-10T10:57:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T10:47:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Tejdeep/videomae-base-finetuned-ssv2-finetuned-ucfcrime-ep10
|
Tejdeep
| 2023-07-10T10:52:27Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-07-10T07:14:48Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ssv2-finetuned-ucfcrime-ep10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ssv2-finetuned-ucfcrime-ep10
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-ssv2](https://huggingface.co/MCG-NJU/videomae-base-finetuned-ssv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9717
- eval_accuracy: 0.8004
- eval_runtime: 868.8117
- eval_samples_per_second: 3.177
- eval_steps_per_second: 1.588
- epoch: 0.1
- step: 140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1400
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-cbt-mod-formatting-rarity-all-4k
|
NasimB
| 2023-07-10T10:37:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T08:41:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-rarity-all-4k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-rarity-all-4k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
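A minimal `TrainingArguments` sketch mirroring these settings is shown below; the output directory, evaluation cadence and dataset objects are placeholders rather than details taken from the original run.
```python
from transformers import Trainer, TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and the dataset objects are placeholders.
training_args = TrainingArguments(
    output_dir="gpt2-concat-cbt-mod-formatting-rarity-all-4k",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="steps",
    eval_steps=500,               # matches the evaluation cadence in the results table below
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```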
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6908 | 0.29 | 500 | 5.6398 |
| 5.3277 | 0.58 | 1000 | 5.2063 |
| 4.9916 | 0.88 | 1500 | 4.9594 |
| 4.7094 | 1.17 | 2000 | 4.8109 |
| 4.5545 | 1.46 | 2500 | 4.6839 |
| 4.4417 | 1.75 | 3000 | 4.5738 |
| 4.3263 | 2.05 | 3500 | 4.5004 |
| 4.1291 | 2.34 | 4000 | 4.4536 |
| 4.1022 | 2.63 | 4500 | 4.3946 |
| 4.0499 | 2.92 | 5000 | 4.3446 |
| 3.859 | 3.22 | 5500 | 4.3360 |
| 3.7995 | 3.51 | 6000 | 4.3041 |
| 3.7749 | 3.8 | 6500 | 4.2712 |
| 3.6794 | 4.09 | 7000 | 4.2694 |
| 3.512 | 4.39 | 7500 | 4.2640 |
| 3.5086 | 4.68 | 8000 | 4.2500 |
| 3.4939 | 4.97 | 8500 | 4.2378 |
| 3.3321 | 5.26 | 9000 | 4.2512 |
| 3.3209 | 5.56 | 9500 | 4.2497 |
| 3.3179 | 5.85 | 10000 | 4.2489 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Vtuber-plan/ningyu-spring-15b-v1.0
|
Vtuber-plan
| 2023-07-10T10:32:01Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-13T17:12:48Z |
---
license: openrail
---
This repository contains the model weights for the spring series of the ningyu model.
The spring series is built on the starcoderplus base model and fine-tuned (SFT) on multilingual dialogue corpora.
|
FelixChao/medical_faq_gpt_vicuna7b_chinese
|
FelixChao
| 2023-07-10T10:32:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-10T10:31:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
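A minimal sketch of reconstructing this quantization config and loading the adapter is shown below; the base model ID is an assumption (the card does not state it), so substitute the checkpoint the adapter was actually trained on.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstructs the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: a Vicuna-7B base checkpoint; replace with the base the adapter was trained on.
base_model_id = "lmsys/vicuna-7b-v1.3"
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

model = PeftModel.from_pretrained(base_model, "FelixChao/medical_faq_gpt_vicuna7b_chinese")
```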
### Framework versions
- PEFT 0.4.0.dev0
|
teppei727/bert_woco
|
teppei727
| 2023-07-10T10:30:30Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:1702.00992",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-12T05:46:20Z |
---
language:
- en
pipeline_tag: text-classification
---
# bert-woco
Finetuned BERT model for 13-class classification, without a discourse relation (Expansion.Conjunction). It was introduced in the paper: [Automatic Slide Generation Using Discourse Relations](https://link.springer.com/chapter/10.1007/978-3-031-36336-8_61) and first released in this repository. This model is uncased: it does not make a difference between english and English.
In our proposed method in this [paper](https://link.springer.com/chapter/10.1007/978-3-031-36336-8_61), we used this model for the classification of discourse relation between the SECOND and THIRD sentence and beyond in summarized sentences. The model is NOT used between the FIRST and SECOND sentences.
# Description
This model classifies the discourse relation between an input sentence pair.
We are still preparing the full model card; please check back in a few days.
The model was fine-tuned from [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the dataset published in the paper [Automatic Prediction of Discourse Connectives](https://arxiv.org/abs/1702.00992).
The dataset used to build this model is based on English Wikipedia data and has 20 labels. However, this model classifies into 13 labels, because the 20-class dataset was restructured into 14 classes to suit our research objective of "automatic slide generation". The distribution is shown below.
This model does not include the discourse relation Expansion.Conjunction, because that relation assumes a link to the immediately preceding sentence pair, so it is inappropriate to apply it between the first and second sentences.
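A minimal usage sketch with the `transformers` pipeline is shown below; the example sentence pair is invented and only illustrates how the input is passed.
```python
from transformers import pipeline

# Sketch: classify the discourse relation between a sentence pair (example sentences invented).
classifier = pipeline("text-classification", model="teppei727/bert_woco")

result = classifier({
    "text": "The experiment was repeated three times.",
    "text_pair": "The results were consistent across all runs.",
})
print(result)
```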
|
soBeauty/xlm-roberta-base-09072023-test_2
|
soBeauty
| 2023-07-10T10:22:22Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-10T09:30:15Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-09072023-test_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-09072023-test_2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-25
|
crisU8
| 2023-07-10T10:18:28Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T09:52:07Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-25
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2487
- Precision: 0.7372
- Recall: 0.8035
- F1: 0.7689
- Accuracy: 0.9270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 18
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 446 | 0.2607 | 0.6701 | 0.7772 | 0.7197 | 0.9113 |
| 0.6128 | 2.0 | 892 | 0.2298 | 0.7266 | 0.7964 | 0.7599 | 0.9254 |
| 0.1927 | 3.0 | 1338 | 0.2487 | 0.7372 | 0.8035 | 0.7689 | 0.9270 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
|
jordyvl
| 2023-07-10T10:05:07Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-10T09:48:49Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_rvl_cdip_100_examples_per_class_kd_MSE_lr_fix
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4358
- Accuracy: 0.195
- Brier Loss: 0.9035
- Nll: 12.0550
- F1 Micro: 0.195
- F1 Macro: 0.1471
- Ece: 0.1675
- Aurc: 0.6988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 1.5167 | 0.07 | 0.9368 | 20.8948 | 0.07 | 0.0305 | 0.1106 | 0.8850 |
| No log | 2.0 | 50 | 1.5246 | 0.08 | 0.9362 | 21.4368 | 0.08 | 0.0346 | 0.1200 | 0.8659 |
| No log | 3.0 | 75 | 1.5053 | 0.1 | 0.9340 | 23.7241 | 0.1000 | 0.0522 | 0.1280 | 0.8087 |
| No log | 4.0 | 100 | 1.5097 | 0.0975 | 0.9322 | 17.3004 | 0.0975 | 0.0487 | 0.1220 | 0.8220 |
| No log | 5.0 | 125 | 1.4926 | 0.12 | 0.9296 | 16.3893 | 0.12 | 0.0600 | 0.1284 | 0.7752 |
| No log | 6.0 | 150 | 1.4838 | 0.105 | 0.9273 | 19.3692 | 0.1050 | 0.0356 | 0.1254 | 0.7955 |
| No log | 7.0 | 175 | 1.4729 | 0.0975 | 0.9229 | 18.6899 | 0.0975 | 0.0411 | 0.1134 | 0.7963 |
| No log | 8.0 | 200 | 1.4754 | 0.125 | 0.9196 | 17.7842 | 0.125 | 0.0676 | 0.1238 | 0.7778 |
| No log | 9.0 | 225 | 1.4725 | 0.1125 | 0.9193 | 16.6572 | 0.1125 | 0.0505 | 0.1254 | 0.7839 |
| No log | 10.0 | 250 | 1.4702 | 0.1175 | 0.9168 | 16.3975 | 0.1175 | 0.0556 | 0.1183 | 0.7638 |
| No log | 11.0 | 275 | 1.4648 | 0.1175 | 0.9169 | 18.4274 | 0.1175 | 0.0558 | 0.1219 | 0.7806 |
| No log | 12.0 | 300 | 1.4660 | 0.155 | 0.9166 | 15.6492 | 0.155 | 0.0791 | 0.1411 | 0.7512 |
| No log | 13.0 | 325 | 1.4684 | 0.16 | 0.9164 | 17.1698 | 0.16 | 0.1140 | 0.1519 | 0.7285 |
| No log | 14.0 | 350 | 1.4662 | 0.1175 | 0.9158 | 17.6999 | 0.1175 | 0.0501 | 0.1269 | 0.7637 |
| No log | 15.0 | 375 | 1.4602 | 0.1675 | 0.9143 | 13.2540 | 0.1675 | 0.1153 | 0.1515 | 0.7223 |
| No log | 16.0 | 400 | 1.4556 | 0.1325 | 0.9138 | 13.3868 | 0.1325 | 0.0881 | 0.1323 | 0.7558 |
| No log | 17.0 | 425 | 1.4527 | 0.175 | 0.9128 | 11.1983 | 0.175 | 0.1334 | 0.1596 | 0.7153 |
| No log | 18.0 | 450 | 1.4535 | 0.1625 | 0.9111 | 17.6046 | 0.1625 | 0.1021 | 0.1435 | 0.7379 |
| No log | 19.0 | 475 | 1.4453 | 0.1825 | 0.9086 | 11.8948 | 0.1825 | 0.1228 | 0.1594 | 0.7098 |
| 1.4614 | 20.0 | 500 | 1.4431 | 0.1525 | 0.9078 | 14.2631 | 0.1525 | 0.1115 | 0.1410 | 0.7293 |
| 1.4614 | 21.0 | 525 | 1.4392 | 0.1825 | 0.9063 | 10.7664 | 0.1825 | 0.1378 | 0.1567 | 0.7058 |
| 1.4614 | 22.0 | 550 | 1.4469 | 0.1775 | 0.9055 | 13.4724 | 0.1775 | 0.1212 | 0.1483 | 0.7107 |
| 1.4614 | 23.0 | 575 | 1.4356 | 0.17 | 0.9039 | 11.8141 | 0.17 | 0.1232 | 0.1515 | 0.7091 |
| 1.4614 | 24.0 | 600 | 1.4370 | 0.1875 | 0.9039 | 12.9338 | 0.1875 | 0.1384 | 0.1539 | 0.7017 |
| 1.4614 | 25.0 | 625 | 1.4358 | 0.195 | 0.9035 | 12.0550 | 0.195 | 0.1471 | 0.1675 | 0.6988 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
VitCon/ppo-Huggy
|
VitCon
| 2023-07-10T09:52:53Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T09:14:34Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VitCon/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sarahflan/xlm-roberta-base-finetuned-panx-de
|
sarahflan
| 2023-07-10T09:49:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-07T14:27:30Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863220155832338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1642 | 0.8263 |
| 0.1289 | 2.0 | 1050 | 0.1397 | 0.8420 |
| 0.0819 | 3.0 | 1575 | 0.1352 | 0.8632 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-cbt-mod-formatting-iorder
|
NasimB
| 2023-07-10T09:49:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T07:53:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-iorder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-iorder
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6968 | 0.29 | 500 | 5.6513 |
| 5.3378 | 0.58 | 1000 | 5.2009 |
| 4.9872 | 0.87 | 1500 | 4.9494 |
| 4.7139 | 1.17 | 2000 | 4.8067 |
| 4.5498 | 1.46 | 2500 | 4.6856 |
| 4.4409 | 1.75 | 3000 | 4.5701 |
| 4.3205 | 2.04 | 3500 | 4.4941 |
| 4.1232 | 2.33 | 4000 | 4.4477 |
| 4.0973 | 2.62 | 4500 | 4.3983 |
| 4.0559 | 2.92 | 5000 | 4.3384 |
| 3.8563 | 3.21 | 5500 | 4.3338 |
| 3.7996 | 3.5 | 6000 | 4.3046 |
| 3.7773 | 3.79 | 6500 | 4.2716 |
| 3.6803 | 4.08 | 7000 | 4.2668 |
| 3.5068 | 4.37 | 7500 | 4.2615 |
| 3.5069 | 4.66 | 8000 | 4.2474 |
| 3.4888 | 4.96 | 8500 | 4.2326 |
| 3.3412 | 5.25 | 9000 | 4.2462 |
| 3.3188 | 5.54 | 9500 | 4.2448 |
| 3.3074 | 5.83 | 10000 | 4.2440 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
justinpinkney/falcon-7b
|
justinpinkney
| 2023-07-10T09:49:02Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-07T14:25:17Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
duplicated_from: tiiuae/falcon-7b
---
# 🚀 Falcon-7B
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific usecases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-7B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
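To make the multiquery attention change concrete, the toy PyTorch sketch below contrasts it with standard multi-head attention: all query heads share a single key/value head, which shrinks the KV cache at inference time. This is only an illustration, not Falcon's actual implementation, which additionally fuses rotary embeddings and FlashAttention.
```python
import torch
import torch.nn as nn

class MultiQueryAttention(nn.Module):
    """Toy multiquery self-attention: n_heads query heads, one shared K/V head."""

    def __init__(self, d_model: int, n_heads: int, head_dim: int):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, head_dim
        self.q_proj = nn.Linear(d_model, n_heads * head_dim, bias=False)
        self.kv_proj = nn.Linear(d_model, 2 * head_dim, bias=False)  # single K/V head
        self.out_proj = nn.Linear(n_heads * head_dim, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.head_dim, dim=-1)
        k, v = k.unsqueeze(1), v.unsqueeze(1)            # broadcast over all query heads
        att = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        y = (att @ v).transpose(1, 2).reshape(b, t, self.n_heads * self.head_dim)
        return self.out_proj(y)

# Example dimensions chosen to match the table above (d_model 4544, head_dim 64, hence 71 query heads).
mqa = MultiQueryAttention(d_model=4544, n_heads=71, head_dim=64)
print(mqa(torch.randn(1, 8, 4544)).shape)  # torch.Size([1, 8, 4544])
```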
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite it:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae
|
TheBloke/GodziLLa-30B-GGML
|
TheBloke
| 2023-07-10T09:38:45Z | 0 | 4 | null |
[
"merge",
"mix",
"cot",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2023-07-09T11:53:15Z |
---
inference: false
license: other
pipeline_tag: text-generation
tags:
- merge
- mix
- cot
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Maya Philippines' GodziLLa 30B GGML
These files are GGML format model files for [Maya Philippines' GodziLLa 30B](https://huggingface.co/MayaPH/GodziLLa-30B).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
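For programmatic use with the llama-cpp-python bindings listed above, a minimal sketch might look like this; the chosen file and sampling parameters are illustrative defaults, not recommendations from the model author.
```python
from llama_cpp import Llama

# Minimal llama-cpp-python sketch; pick any of the GGML files from the table below.
# n_gpu_layers controls GPU offload (set to 0 for CPU-only inference).
llm = Llama(model_path="godzilla-30b.ggmlv3.q4_K_M.bin", n_ctx=2048, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a story about llamas\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```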
## Licensing
This model is GodziLLa-30B, a language model developed by Maya Philippines.
Maya Philippines' work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
For more information, visit: https://creativecommons.org/licenses/by-nc/4.0/
This model is based on Meta LLaMA weights, which are licensed under a bespoke research-only non-commercial license.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GodziLLa-30B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/GodziLLa-30B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/MayaPH/GodziLLa-30B)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: PROMPT
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| godzilla-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| godzilla-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| godzilla-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| godzilla-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| godzilla-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| godzilla-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| godzilla-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| godzilla-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| godzilla-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| godzilla-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| godzilla-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| godzilla-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| godzilla-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| godzilla-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m godzilla-30b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Maya Philippines' GodziLLa 30B
<img src="https://drive.google.com/uc?export=view&id=16DzZwhqybQvT1wQVp-6qXHI9HhKft6CR" width="50%" alt="GodziLLa-30B">
Released July 9, 2023
## Model Description
GodziLLa-30B is an experimental combination of various proprietary Maya LoRAs with CalderaAI's [Lazarus-30B](https://huggingface.co/CalderaAI/30B-Lazarus). This composite model is not meant for any other use outside of research on competing LoRA adapter behavior. More specifically, since this is inherently a LlaMA model, **commercial use is prohibited**. This model's primary purpose is to stress test the limitations of composite LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

## Recommended Prompt Format
Alpaca's instruction format is the recommended prompt format, but Vicuna's instruction format may also work.
## Usage
To use GodziLLa-30B, you are required to provide attribution in accordance with the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Please include the following attribution notice when utilizing GodziLLa-30B in your work:
```python
# This code uses GodziLLa-30B, a language model developed by Maya Philippines.
# The model is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
# For more information, visit: https://creativecommons.org/licenses/by-nc/4.0/
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MayaPH/GodziLLa-30B")
model = AutoModelForCausalLM.from_pretrained("MayaPH/GodziLLa-30B")
```
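Building on the snippet above, here is a hedged generation sketch; the dtype, device placement, prompt wording, and sampling parameters are illustrative assumptions, not settings recommended by Maya Philippines.
```python
# Illustrative generation sketch (assumptions: fp16 weights, `accelerate` installed
# for device_map="auto", Alpaca-style prompt, arbitrary sampling parameters).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MayaPH/GodziLLa-30B")
model = AutoModelForCausalLM.from_pretrained(
    "MayaPH/GodziLLa-30B",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```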
Please ensure that you include the relevant attribution notice in your code or any other form of usage and restrict your usage to non-commercial use to comply with the license terms.
## Ethical Considerations
When using GodziLLa-30B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of multiple model checkpoints and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa-30B, please contact the Maya Philippines iOps Team via jasper.catapang@maya.ph.
## Disclaimer
GodziLLa-30B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa-30B was made possible by Maya Philippines, through the curation of various proprietary datasets and the creation of the proprietary LoRA adapters.
|
Arindam75/a2c-PandaReachDense-v2
|
Arindam75
| 2023-07-10T09:27:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T09:24:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.60 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
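Until the author fills in the section above, here is a minimal, hedged loading sketch; the checkpoint filename is an assumption (check the repository's file list), and rolling the policy out additionally requires a `panda-gym` environment.
```python
# Hedged sketch: the filename is assumed, not taken from this card.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Arindam75/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumption: verify against the repo files
)
model = A2C.load(checkpoint)
print(model.policy)  # running episodes requires a PandaReachDense-v2 env from panda-gym
```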
|
dangvansam/whisper-small-vi-finetuned-750h
|
dangvansam
| 2023-07-10T09:27:05Z | 75 | 2 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"vi",
"dataset:vivos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-08T03:57:40Z |
---
language:
- vi
license: apache-2.0
tags:
- whisper-event
datasets:
- vivos
metrics:
- wer
model-index:
- name: Whisper Small Vietnamese
results:
# - task:
# type: automatic-speech-recognition
# name: Automatic Speech Recognition
# dataset:
# name: mozilla-foundation/common_voice_11_0
# type: mozilla-foundation/common_voice_11_0
# config: vi
# split: test
# metrics:
# - type: wer
# value: 16.63
# name: WER
# - type: cer
# value: 7.74
# name: CER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: vivos
type: vivos
split: test
metrics:
- type: wer
value: 13.4
name: WER
# - type: cer
# value: 3.67
# name: CER
---
|
nolanaatama/stnmrshsthprkrvcv2300pchrhys
|
nolanaatama
| 2023-07-10T09:19:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T09:16:40Z |
---
license: creativeml-openrail-m
---
|
Phips/Taxi-v3
|
Phips
| 2023-07-10T09:16:11Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T09:02:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Phips/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
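The snippet above relies on a `load_from_hub` helper (and `gym`) that are not imported; one possible minimal implementation, written as an assumption rather than the course's exact code, is:
```python
# Hypothetical helper matching the usage above: downloads the pickled
# Q-table dict from the Hub and unpickles it.
import pickle

import gym  # needed for gym.make(...) in the snippet above
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```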
|
Evan-Lin/Bart-RL-many-keywordmax-attractive
|
Evan-Lin
| 2023-07-10T09:15:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-09T20:09:53Z |
yelp - 20000
each word cos sim and keyword 1/4
attractive 1
entailment 0
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-23
|
crisU8
| 2023-07-10T09:09:41Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T09:04:36Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-23
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2372
- Precision: 0.7614
- Recall: 0.8233
- F1: 0.7911
- Accuracy: 0.9322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
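As a rough illustration only (this is not the author's training script), the listed values map onto Hugging Face `TrainingArguments` roughly as follows; the `output_dir` is a placeholder.
```python
from transformers import TrainingArguments

# Sketch under assumptions: output_dir is a placeholder, and the optimizer defaults
# (AdamW with betas=(0.9, 0.999), epsilon=1e-08) match the values listed above.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner-clinical-plncmm-large-23",
    learning_rate=3e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=400,
    num_train_epochs=3,
)
```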
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.611 | 1.0 | 686 | 0.2341 | 0.7001 | 0.7997 | 0.7466 | 0.9248 |
| 0.2088 | 2.0 | 1372 | 0.2449 | 0.7406 | 0.8227 | 0.7795 | 0.9294 |
| 0.1203 | 3.0 | 2058 | 0.2372 | 0.7614 | 0.8233 | 0.7911 | 0.9322 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
come-st/donut_finetuning_macif
|
come-st
| 2023-07-10T09:07:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-03T13:45:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut_finetuning_macif
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut_finetuning_macif
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0537 | 1.0 | 2552 | 0.0519 |
| 0.0267 | 2.0 | 5104 | 0.0400 |
| 0.0096 | 3.0 | 7656 | 0.0392 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shamalka/Robin_Gibb
|
Shamalka
| 2023-07-10T09:04:14Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2023-07-10T09:04:14Z |
---
license: bigscience-bloom-rail-1.0
---
|
nolanaatama/bra
|
nolanaatama
| 2023-07-10T08:52:35Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-07T05:43:30Z |
---
license: creativeml-openrail-m
---
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-20
|
crisU8
| 2023-07-10T08:51:48Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T08:46:34Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-20
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2349
- Precision: 0.7664
- Recall: 0.8337
- F1: 0.7986
- Accuracy: 0.9349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6258 | 1.0 | 857 | 0.2335 | 0.7199 | 0.8041 | 0.7597 | 0.9257 |
| 0.1764 | 2.0 | 1714 | 0.2308 | 0.7521 | 0.8277 | 0.7881 | 0.9319 |
| 0.1117 | 3.0 | 2571 | 0.2349 | 0.7664 | 0.8337 | 0.7986 | 0.9349 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aeala/Chronoboros-33b-4bit
|
Aeala
| 2023-07-10T08:48:19Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T06:52:52Z |
4-bit GPTQ quantization of the [chronoboros-33b](https://huggingface.co/Henk717/chronoboros-33B) merge.
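A hedged loading sketch with AutoGPTQ is shown below; the file format flags (e.g. `use_safetensors`) and the device are assumptions — check the repository's files before relying on them.
```python
# Sketch under assumptions: safetensors weights, a single CUDA device, and
# AutoGPTQ picking up the quantization config shipped with the repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "Aeala/Chronoboros-33b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

inputs = tokenizer("Write a short story about a dragon.", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```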
|
danbrown/checkpoints
|
danbrown
| 2023-07-10T08:43:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-10T08:20:33Z |
---
license: creativeml-openrail-m
---
This is a collection of Stable Diffusion model checkpoints, just like my other LoRA collection.
I may list the models here with more details as I add them.
The models here can be third-party checkpoints or personal experiments.
|
gfx-labs/distilbert-base-uncased-finetuned-emotion
|
gfx-labs
| 2023-07-10T08:41:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-10T07:50:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1521
- Accuracy: 0.9365
- F1-score: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.1014 | 1.0 | 250 | 0.1598 | 0.9345 | 0.9340 |
| 0.064 | 2.0 | 500 | 0.1521 | 0.9365 | 0.9368 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-17
|
crisU8
| 2023-07-10T08:26:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T08:22:41Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-17
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2342
- Precision: 0.7508
- Recall: 0.8216
- F1: 0.7846
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2466 | 0.6765 | 0.8079 | 0.7364 | 0.9211 |
| 0.5428 | 2.0 | 858 | 0.2363 | 0.7439 | 0.8145 | 0.7776 | 0.9290 |
| 0.1798 | 3.0 | 1287 | 0.2342 | 0.7508 | 0.8216 | 0.7846 | 0.9316 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-15
|
crisU8
| 2023-07-10T08:15:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T07:57:20Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-15
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2339
- Precision: 0.7526
- Recall: 0.8282
- F1: 0.7886
- Accuracy: 0.9309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2466 | 0.6954 | 0.7958 | 0.7423 | 0.9223 |
| 0.5736 | 2.0 | 858 | 0.2380 | 0.7354 | 0.8178 | 0.7744 | 0.9264 |
| 0.1845 | 3.0 | 1287 | 0.2339 | 0.7526 | 0.8282 | 0.7886 | 0.9309 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lloydchang/wongstein-vide-noir
|
lloydchang
| 2023-07-10T07:49:17Z | 207 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text-generation-inference",
"en",
"dataset:amazon_us_reviews",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T07:43:16Z |
---
license: creativeml-openrail-m
datasets:
- amazon_us_reviews
language:
- en
tags:
- text-generation-inference
---
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-13
|
crisU8
| 2023-07-10T07:39:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T07:21:14Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-13
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2336
- Precision: 0.7488
- Recall: 0.8227
- F1: 0.7840
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2408 | 0.7044 | 0.8123 | 0.7545 | 0.9223 |
| 0.5338 | 2.0 | 858 | 0.2382 | 0.7322 | 0.8178 | 0.7726 | 0.9264 |
| 0.1771 | 3.0 | 1287 | 0.2336 | 0.7488 | 0.8227 | 0.7840 | 0.9308 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HamZurger/ppo-LunarLander-v2
|
HamZurger
| 2023-07-10T07:37:02Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:36:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.22 +/- 11.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
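Until the author fills in the section above, here is a minimal, hedged sketch; the checkpoint filename is an assumption (check the repository's file list), and `gym`'s Box2D extras must be installed for LunarLander-v2.
```python
# Hedged sketch: the filename is assumed, not taken from this card.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="HamZurger/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption: verify against the repo files
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```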
|
NasimB/gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
|
NasimB
| 2023-07-10T07:28:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T05:33:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-mod-rm-ref-2k-rarity-2p5k-p13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7031 | 0.29 | 500 | 5.6491 |
| 5.3381 | 0.59 | 1000 | 5.2075 |
| 4.9932 | 0.88 | 1500 | 4.9530 |
| 4.7151 | 1.17 | 2000 | 4.8058 |
| 4.5556 | 1.46 | 2500 | 4.6811 |
| 4.4516 | 1.76 | 3000 | 4.5752 |
| 4.3291 | 2.05 | 3500 | 4.4930 |
| 4.1341 | 2.34 | 4000 | 4.4457 |
| 4.1006 | 2.63 | 4500 | 4.3896 |
| 4.0621 | 2.93 | 5000 | 4.3371 |
| 3.8509 | 3.22 | 5500 | 4.3335 |
| 3.8058 | 3.51 | 6000 | 4.2974 |
| 3.7835 | 3.81 | 6500 | 4.2701 |
| 3.6851 | 4.1 | 7000 | 4.2656 |
| 3.5155 | 4.39 | 7500 | 4.2594 |
| 3.5136 | 4.68 | 8000 | 4.2428 |
| 3.5037 | 4.98 | 8500 | 4.2302 |
| 3.3411 | 5.27 | 9000 | 4.2422 |
| 3.321 | 5.56 | 9500 | 4.2417 |
| 3.323 | 5.85 | 10000 | 4.2412 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chizhikchi/sci-five-radsum23
|
chizhikchi
| 2023-07-10T07:27:40Z | 209 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"medical",
"clinical",
"en",
"dataset:MIMIC-III",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-05-02T10:22:12Z |
---
license: afl-3.0
tags:
- summarization
- t5
- medical
- clinical
language: en
datasets:
- MIMIC-III
widget:
- again noted is the large intraparenchymal hemorrhage in the posterior right frontal lobe with extension into both lateral ventricles. the degree of surrounding edema and effacement of adjacent sulci is unchanged. there is minor contralateral shift of normal midline structures. the ventricular size is unchanged. subarachnoid blood is now seen in the left frontal and parietal lobes, likely due to recirculation of the ventricular blood.
- a least two attempts were made at imaging, however, the study remains severely limited by patient motion. minimal hyperdensity tracks along a left parietal sulcus (2a:18) is equivocal for a small subarachnoid hemorhage. there is no large mass effect detected. there is no shift of normally midline structures. a minimally displaced zygomatic fracture is present (2a:9). the middle ear cavities, mastoid air cells are clear. there is extensive soft tissue swelling overlying the right frontal calvarium with swelling extending to the right preseptal soft tissues (2a:12). there is mild - moderate mucosal thickening within the ethmoid and maxillary sinuses with some fluid and fluid mucosal thickening in the sphenoid sinus.
inference:
parameters:
max_length: 350
metrics:
- rouge-l
---
# Impression Section Generator for Radiology Reports 🏥
This model is the result of the participation of the SINAI team in [Task 1B: Radiology Report Summarization](https://vilmedic.app/misc/bionlp23/sharedtask) at the BioNLP workshop held at ACL 2023.
The goal of this task is to foster the development of automatic radiology report summarization systems and to expand their applicability by incorporating seven different modalities and anatomies in the provided data.
We propose to automate the generation of radiology impressions with "sequence-to-sequence" learning that leverages the power of publicly available pre-trained models, both general-domain and biomedical domain-specific.
This repository provides access to our best-performing system, which results from fine-tuning [Sci-Five base](https://huggingface.co/razent/SciFive-base-Pubmed_PMC), a T5 model trained for an extra 200k steps to adapt it to biomedical literature.
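A minimal inference sketch with the `transformers` summarization pipeline is given below; `max_length` mirrors the widget settings above, while the truncated findings text and decoding defaults are illustrative.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chizhikchi/sci-five-radsum23")

findings = (
    "again noted is the large intraparenchymal hemorrhage in the posterior right "
    "frontal lobe with extension into both lateral ventricles. ..."
)
impression = summarizer(findings, max_length=350)[0]["summary_text"]
print(impression)
```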
# Results
The official evaluation results show that adapting a general-domain system to biomedical literature is beneficial for subsequent fine-tuning on the radiology report summarization task. The table below summarizes the official scores obtained by this model during the official evaluation. Team standings are available [here](https://vilmedic.app/misc/bionlp23/leaderboard/).
| BLEU4 | ROUGE-L | BERTScore | F1-RadGraph |
|-------|---------|-----------|-------------|
| 17.38 | 32.32   | 55.04     | 33.96       |
# System description paper and citation
The paper with a detailed description of the system is published in the [Proceedings of the 22nd Workshop on Biomedical Language Processing](https://aclanthology.org/2023.bionlp-1.53/).
BibTeX citation:
```
@inproceedings{chizhikova-etal-2023-sinai,
title = "{SINAI} at {R}ad{S}um23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model",
author = "Chizhikova, Mariia and
Diaz-Galiano, Manuel and
Urena-Lopez, L. Alfonso and
Martin-Valdivia, M. Teresa",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.53",
pages = "530--534",
abstract = "This paper covers participation of the SINAI team in the shared task 1B: Radiology Report Summarization at the BioNLP workshop held on ACL 2023. Our proposal follows a sequence-to-sequence approach which leverages pre-trained multilingual general domain and monolingual biomedical domain pre-trained language models. The best performing system based on domain-specific model reached 33.96 F1RadGraph score which is the fourth best result among the challenge participants. This model was made publicly available on HuggingFace. We also describe an attempt of Proximal Policy Optimization Reinforcement Learning that was made in order to improve the factual correctness measured with F1RadGraph but did not lead to satisfactory results.",
}
```
|
zhundred/q-FrozenLake-v1-4x4-noSlippery
|
zhundred
| 2023-07-10T07:21:40Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:21:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="zhundred/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
crisU8/bert-finetuned-ner-clinical-plncmm-large-12
|
crisU8
| 2023-07-10T07:20:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-10T07:02:30Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-clinical-plncmm-large-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-clinical-plncmm-large-12
This model is a fine-tuned version of [plncmm/beto-clinical-wl-es](https://huggingface.co/plncmm/beto-clinical-wl-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2372
- Precision: 0.7453
- Recall: 0.8189
- F1: 0.7803
- Accuracy: 0.9300
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 429 | 0.2478 | 0.68 | 0.7838 | 0.7282 | 0.9203 |
| 0.6268 | 2.0 | 858 | 0.2396 | 0.7336 | 0.8117 | 0.7707 | 0.9268 |
| 0.1931 | 3.0 | 1287 | 0.2372 | 0.7453 | 0.8189 | 0.7803 | 0.9300 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sukmin/unity-mlagents-Pyramids
|
Sukmin
| 2023-07-10T07:14:45Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:14:42Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Sukmin/unity-mlagents-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
zwtharry/ppo-Huggy
|
zwtharry
| 2023-07-10T07:13:03Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:12:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zwtharry/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yuuhan/roberta-base-mnli-lora-layer0-5
|
yuuhan
| 2023-07-10T07:04:20Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T02:16:35Z |
---
library_name: peft
---
## Training procedure
MNLI acc: 0.8556291390728477
### Framework versions
- PEFT 0.4.0.dev0
|