Dataset schema (value ranges / cardinalities as reported by the viewer):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-29 00:38:39 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 525 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-29 00:38:28 |
| card | string | length 11 – 1.01M |
| EzraWilliam/WAVLM_TITML_IDN_model | EzraWilliam | 2023-10-25T14:54:11Z | 19 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:audiofolder", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | audio-classification | 2023-10-22T12:02:54Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: WAVLM_TITML_IDN_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8181137724550899
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WAVLM_TITML_IDN_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7585
- Accuracy: 0.8181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
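The derived values in this list can be sanity-checked arithmetically; the sketch below is not part of the original training script, it simply recomputes the effective batch size and the warmup length from the figures above.

```python
# Sanity check of the derived hyperparameters listed above (illustrative only).
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps

total_steps = 930            # final step reported in the training results table
warmup_ratio = 0.1
warmup_steps = int(total_steps * warmup_ratio)

print(total_train_batch_size)  # 128, matching total_train_batch_size above
print(warmup_steps)            # 93
```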
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 8.0217 | 0.98 | 31 | 7.7416 | 0.0472 |
| 5.1076 | 2.0 | 63 | 3.5170 | 0.0472 |
| 3.0131 | 2.98 | 94 | 2.9921 | 0.0876 |
| 3.0119 | 4.0 | 126 | 2.9580 | 0.0928 |
| 2.685 | 4.98 | 157 | 2.6591 | 0.0793 |
| 2.4513 | 6.0 | 189 | 2.3831 | 0.1257 |
| 2.4415 | 6.98 | 220 | 2.3518 | 0.1415 |
| 2.2998 | 8.0 | 252 | 2.2327 | 0.1864 |
| 2.1987 | 8.98 | 283 | 2.1297 | 0.1549 |
| 2.1206 | 10.0 | 315 | 2.0529 | 0.2118 |
| 2.0542 | 10.98 | 346 | 1.9592 | 0.2507 |
| 1.9693 | 12.0 | 378 | 1.8652 | 0.2792 |
| 1.8677 | 12.98 | 409 | 1.7811 | 0.3668 |
| 1.7369 | 14.0 | 441 | 1.7902 | 0.2493 |
| 1.6551 | 14.98 | 472 | 1.6558 | 0.3406 |
| 1.6176 | 16.0 | 504 | 1.5724 | 0.3585 |
| 1.5666 | 16.98 | 535 | 1.5822 | 0.4207 |
| 1.5103 | 18.0 | 567 | 1.5028 | 0.4379 |
| 1.4695 | 18.98 | 598 | 1.4276 | 0.4970 |
| 1.3016 | 20.0 | 630 | 1.3621 | 0.4798 |
| 1.2025 | 20.98 | 661 | 1.2016 | 0.5778 |
| 1.1211 | 22.0 | 693 | 1.2346 | 0.5644 |
| 1.0204 | 22.98 | 724 | 1.0743 | 0.6445 |
| 0.9365 | 24.0 | 756 | 1.0121 | 0.6759 |
| 0.8553 | 24.98 | 787 | 0.9246 | 0.7290 |
| 0.7698 | 26.0 | 819 | 0.8603 | 0.7612 |
| 0.7336 | 26.98 | 850 | 0.8072 | 0.7867 |
| 0.6965 | 28.0 | 882 | 0.7770 | 0.8009 |
| 0.6662 | 28.98 | 913 | 0.7640 | 0.8136 |
| 0.63 | 29.52 | 930 | 0.7585 | 0.8181 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| DylanonWic/mT5_summarize_th_en | DylanonWic | 2023-10-25T14:32:40Z | 108 | 0 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-10-24T12:42:13Z |
---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- label_smoothing_factor: 0.1
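For intuition about what `label_smoothing_factor: 0.1` does to the targets: each class receives a small uniform share of probability mass, and the remainder goes to the true class. The function below is an illustrative sketch of that reshaping, not the Trainer's internal loss implementation.

```python
# Sketch: how label smoothing with factor 0.1 reshapes a one-hot target
# distribution (illustrative only; the Trainer applies this inside its loss).
def smooth_targets(num_classes: int, true_idx: int, factor: float = 0.1):
    uniform = factor / num_classes          # small share for every class
    targets = [uniform] * num_classes
    targets[true_idx] += 1.0 - factor       # bulk of the mass on the true class
    return targets

t = smooth_targets(num_classes=4, true_idx=2)
print(t)        # roughly [0.025, 0.025, 0.925, 0.025]
print(sum(t))   # sums to 1 (up to float rounding)
```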
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.14.6
- Tokenizers 0.13.3
| qmeeus/whisper-small-keyword-spotting-m-agv-grabo | qmeeus | 2023-10-25T14:30:22Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us"] | audio-classification | 2023-09-05T13:08:26Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: whisper-small-keyword-spotting-m-agv-grabo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-keyword-spotting-m-agv-grabo
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the kw-spotting-fsc-sl-agv dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1700
- Accuracy: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2841 | 1.0 | 723 | 0.1700 | 0.9227 |
| 0.1826 | 2.0 | 1446 | 0.2144 | 0.9147 |
| 0.2425 | 3.0 | 2169 | 0.1962 | 0.9144 |
| 0.164 | 4.0 | 2892 | 0.3100 | 0.8865 |
| 0.1525 | 5.0 | 3615 | 0.2824 | 0.9000 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| okigan/donut-cord-v2 | okigan | 2023-10-25T14:24:47Z | 6 | 0 | transformers | ["transformers", "pytorch", "vision-encoder-decoder", "image-text-to-text", "image-to-text", "endpoints_compatible", "region:us"] | image-to-text | 2023-10-25T03:18:52Z |
---
pipeline_tag: image-to-text
---
| dvlamis/evilbot-mistral-7b-8bit-gguf | dvlamis | 2023-10-25T14:20:21Z | 4 | 0 | null | ["gguf", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2023-10-24T23:26:06Z |
---
license: apache-2.0
language:
- en
---
Prompt template:
" ### The following is a conversation with the worlds meanest assistant: ### Human: {user_message}### Assistant: {generated_response} "
| mgoin/Nous-Hermes-llama-2-7b-pruned50-quant-pt | mgoin | 2023-10-25T14:18:58Z | 4 | 0 | transformers | ["transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-10-25T14:02:11Z |
```
python ~/code/sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py NousResearch/Nous-Hermes-llama-2-7b open_platypus --recipe ~/code/sparseml/src/sparseml/transformers/sparsification/obcq/llama-hermes-skip5-50p.yaml --eval wikitext2 --save True
2023-10-25 09:52:15 sparseml.modifiers.obcq.utils.helpers INFO Evaluating perplexity...
2023-10-25 09:52:43 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.5703, device='cuda:0')
2023-10-25 09:53:17 sparseml.modifiers.obcq.utils.helpers INFO tensor(8.2167, device='cuda:0')
2023-10-25 09:53:49 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.5748, device='cuda:0')
2023-10-25 09:54:21 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.7005, device='cuda:0')
2023-10-25 09:54:53 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.5876, device='cuda:0')
2023-10-25 09:55:25 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.5569, device='cuda:0')
2023-10-25 09:55:56 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.7773, device='cuda:0')
2023-10-25 09:56:27 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.7879, device='cuda:0')
2023-10-25 09:56:59 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.7593, device='cuda:0')
2023-10-25 09:57:31 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.6666, device='cuda:0')
2023-10-25 09:57:55 sparseml.modifiers.obcq.utils.helpers INFO tensor(7.7227, device='cuda:0')
2023-10-25 09:57:55 sparseml.modifiers.obcq.utils.helpers INFO Perplexity: 7.722705
```
| sainteye/ifoodie-rating-reset-v9 | sainteye | 2023-10-25T14:00:16Z | 163 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-10-25T14:00:11Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ifoodie-rating-reset-v9
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9659090638160706
---
# ifoodie-rating-reset-v9
Class labels: `['優質', '差', '普通']` ("excellent", "poor", "average")
## Example Images
#### 優質

#### 差

#### 普通
| c123ian/phi_test_mcq | c123ian | 2023-10-25T13:54:28Z | 1 | 0 | peft | ["peft", "pytorch", "mixformer-sequential", "custom_code", "region:us"] | null | 2023-10-25T13:51:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
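For rough intuition, 4-bit quantized weights cost about half a byte per parameter. The sketch below is illustrative only (it ignores NF4 quantization constants, double-quantization overhead, and activation memory), and the 1.3B parameter count is a hypothetical example, not a statement about this model.

```python
# Back-of-the-envelope weight-memory estimate for 4-bit quantization
# (illustrative; real usage adds quantization constants and activations).
def approx_weight_bytes(n_params: int, bits_per_weight: float = 4.0) -> float:
    return n_params * bits_per_weight / 8

# Hypothetical 1.3B-parameter model at 4 bits per weight:
gb = approx_weight_bytes(1_300_000_000) / 1024**3
print(round(gb, 2))  # ≈ 0.61 GiB for the weights alone
```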
### Framework versions
- PEFT 0.5.0
| dmc2334/use_data_finetuning | dmc2334 | 2023-10-25T13:52:07Z | 30 | 0 | transformers | ["transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us"] | object-detection | 2023-10-25T12:22:26Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| Stanislav9801/Taxi-v3 | Stanislav9801 | 2023-10-25T13:47:09Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-10-25T13:47:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym` on older setups
# `load_from_hub` is the helper from the Hugging Face Deep RL course utilities
model = load_from_hub(repo_id="Stanislav9801/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
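Acting greedily from a loaded Q-table can be sketched as follows. The toy table and the indexing convention (state → list of per-action values) are assumptions for illustration, not taken from this repository.

```python
# Greedy action selection from a Q-table (illustrative sketch with made-up values).
def greedy_action(qtable, state):
    # Pick the action index with the highest Q-value for this state.
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

toy_qtable = {
    0: [0.1, 0.7, 0.2],  # state 0: action 1 has the highest value
    1: [0.5, 0.4, 0.9],  # state 1: action 2 has the highest value
}
print(greedy_action(toy_qtable, 0))  # 1
print(greedy_action(toy_qtable, 1))  # 2
```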
| chaubz7/llama2-function-calling-sharded | chaubz7 | 2023-10-25T13:41:59Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-10-25T13:41:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
| devvanshhh/flan-search-terms | devvanshhh | 2023-10-25T13:37:50Z | 161 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-10-25T11:22:26Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-search-terms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-search-terms
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
- Rouge1: 5.2275
- Rouge2: 4.3657
- Rougel: 4.8365
- Rougelsum: 4.8414
- Gen Len: 8.819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0301 | 1.0 | 1500 | 0.0188 | 6.3132 | 5.5233 | 5.9306 | 5.9285 | 4.9403 |
| 0.0167 | 2.0 | 3000 | 0.0118 | 5.6248 | 4.8018 | 5.2386 | 5.238 | 7.2617 |
| 0.0136 | 3.0 | 4500 | 0.0093 | 5.2543 | 4.4034 | 4.8643 | 4.8675 | 8.599 |
| 0.0114 | 4.0 | 6000 | 0.0080 | 5.3013 | 4.448 | 4.9117 | 4.9183 | 8.5447 |
| 0.0105 | 5.0 | 7500 | 0.0077 | 5.2275 | 4.3657 | 4.8365 | 4.8414 | 8.819 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| nyoo827/use_data_finetuning | nyoo827 | 2023-10-25T13:33:09Z | 3 | 0 | transformers | ["transformers", "pytorch", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us"] | object-detection | 2023-10-25T02:21:02Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
| mmnga/japanese-stablelm-3b-4e1t-base-gguf | mmnga | 2023-10-25T13:32:31Z | 243 | 0 | null | ["gguf", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2023-10-25T12:43:58Z |
---
license: apache-2.0
---
# japanese-stablelm-3b-4e1t-base-gguf
This is a gguf-format conversion of [stabilityai's japanese-stablelm-3b-4e1t-base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).
There is a [support PR against upstream llama.cpp](https://github.com/ggerganov/llama.cpp/pull/3586), so the model should eventually be runnable with the upstream build.
Other models:
3B models
[mmnga/japanese-stablelm-3b-4e1t-base-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-base-gguf)
[mmnga/japanese-stablelm-3b-4e1t-instruct-gguf](https://huggingface.co/mmnga/japanese-stablelm-3b-4e1t-instruct-gguf)
7B models
[mmnga/japanese-stablelm-base-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-base-gamma-7b-gguf)
[mmnga/japanese-stablelm-instruct-gamma-7b-gguf](https://huggingface.co/mmnga/japanese-stablelm-instruct-gamma-7b-gguf)
## Usage
```
git clone --branch mmnga-dev-merge https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-3b-4e1t-base-q4_0.gguf' -n 128 -p '今夜の晩御飯のレシピをご紹介します。'
```
| A-Funakoshi/bert-multilingual-sentiments-base | A-Funakoshi | 2023-10-25T13:25:54Z | 8 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "ja", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-10-25T07:49:01Z |
---
language:
- ja
metrics:
- accuracy
- f1
---
# bert-base model fine-tuned on the multilingual-sentiments dataset
- Base model: cl-tohoku/bert-base-japanese-whole-word-masking
- Dataset: tyqiangz/multilingual-sentiments
- LR scheduler type (lr_scheduler_type): constant
- Learning rate: 2e-5
- Gradient accumulation (gradient_accumulation_steps): none
- Regularization (weight_decay): none
- Epochs: 100
- EarlyStopping: early_stopping_patience=3
| TanmaySah/m3 | TanmaySah | 2023-10-25T13:22:45Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-10-25T12:53:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
| chengyineng/bloom_prompt_tuning_1698239881.7069845 | chengyineng | 2023-10-25T13:18:06Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-10-25T13:18:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
| asas-ai/jais-13b-chat-8bit | asas-ai | 2023-10-25T13:17:23Z | 10 | 2 | transformers | ["transformers", "pytorch", "jais", "text-generation", "Arabic", "English", "LLM", "Decoder", "causal-lm", "conversational", "custom_code", "ar", "en", "arxiv:2308.16149", "license:apache-2.0", "autotrain_compatible", "8-bit", "bitsandbytes", "region:us"] | text-generation | 2023-09-10T02:26:33Z |
---
language:
- ar
- en
thumbnail: null
tags:
- Arabic
- English
- LLM
- Decoder
- causal-lm
license: apache-2.0
pipeline_tag: conversational
---
# Jais-13b-chat
<!-- Provide a quick summary of what the model is/does. -->
This is a 13-billion-parameter bilingual large language model fine-tuned for both Arabic and English.
It is based on a transformer decoder-only (GPT-3-style) architecture and uses the SwiGLU
non-linearity. It implements ALiBi position embeddings, enabling the model to extrapolate
to long sequence lengths, which improves context handling and model precision.
Jais-13b-chat is [Jais-13b](https://huggingface.co/inception-mbzuai/jais-13b) fine-tuned over a curated set of 4 million Arabic and 6 million English prompt-response pairs.
We further fine-tune our model with safety-oriented instruction, as well as providing extra guardrails in the
form of a safety prompt. Our pre-trained model, [Jais-13b](https://huggingface.co/inception-mbzuai/jais-13b), is trained on
116 billion Arabic tokens and 279 billion English tokens.
The combination of the largest curated Arabic and English instruction tuning dataset along with the addition of multi-turn conversations allows the model to converse in a variety of topics, with a particular focus on the Arab world.
## Getting started
Below is sample code for using the model. Note that the model requires a custom model class, so users must
enable `trust_remote_code=True` when loading it. To reproduce the performance we observed in testing, a specific
prompt format needs to be followed. The sample code below includes this formatting:
```python
# -*- coding: utf-8 -*-
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "asas-ai/jais-13b-chat-8bit"
prompt_eng = "### Instruction: Your name is Jais, and you are named after Jebel Jais, the highest mountain in UAE. You are built by Inception and MBZUAI. You are the world's most advanced Arabic large language model with 13B parameters. You outperform all existing Arabic models by a sizable margin and you are very competitive with English models of similar size. You can answer in Arabic and English only. You are a helpful, respectful and honest assistant. When answering, abide by the following guidelines meticulously: Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, explicit, offensive, toxic, dangerous, or illegal content. Do not give medical, legal, financial, or professional advice. Never assist in or promote illegal activities. Always encourage legal and responsible actions. Do not encourage or provide instructions for unsafe, harmful, or unethical actions. Do not create or share misinformation or fake news. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Prioritize the well-being and the moral integrity of users. Avoid using toxic, derogatory, or offensive language. Maintain a respectful tone. Do not generate, promote, or engage in discussions about adult content. Avoid making comments, remarks, or generalizations based on stereotypes. Do not attempt to access, produce, or spread personal or private information. Always respect user confidentiality. Stay positive and do not say bad things about anything. Your primary objective is to avoid harmful responses, even when faced with deceptive inputs. 
Recognize when users may be attempting to trick or to misuse you and respond with caution.\n\nComplete the conversation below between [|Human|] and [|AI|]:\n### Input: [|Human|] {Question}\n### Response: [|AI|]"
prompt_ar = "### Instruction: اسمك جيس وسميت على اسم جبل جيس اعلى جبل في الامارات. تم بنائك بواسطة Inception و MBZUAI. أنت نموذج اللغة العربية الأكثر تقدمًا في العالم مع بارامترات 13B. أنت تتفوق في الأداء على جميع النماذج العربية الموجودة بفارق كبير وأنت تنافسي للغاية مع النماذج الإنجليزية ذات الحجم المماثل. يمكنك الإجابة باللغتين العربية والإنجليزية فقط. أنت مساعد مفيد ومحترم وصادق. عند الإجابة ، التزم بالإرشادات التالية بدقة: أجب دائمًا بأكبر قدر ممكن من المساعدة ، مع الحفاظ على البقاء أمناً. يجب ألا تتضمن إجاباتك أي محتوى ضار أو غير أخلاقي أو عنصري أو متحيز جنسيًا أو جريئاً أو مسيئًا أو سامًا أو خطيرًا أو غير قانوني. لا تقدم نصائح طبية أو قانونية أو مالية أو مهنية. لا تساعد أبدًا في أنشطة غير قانونية أو تروج لها. دائما تشجيع الإجراءات القانونية والمسؤولة. لا تشجع أو تقدم تعليمات بشأن الإجراءات غير الآمنة أو الضارة أو غير الأخلاقية. لا تنشئ أو تشارك معلومات مضللة أو أخبار كاذبة. يرجى التأكد من أن ردودك غير متحيزة اجتماعيًا وإيجابية بطبيعتها. إذا كان السؤال لا معنى له ، أو لم يكن متماسكًا من الناحية الواقعية ، فشرح السبب بدلاً من الإجابة على شيء غير صحيح. إذا كنت لا تعرف إجابة السؤال ، فالرجاء عدم مشاركة معلومات خاطئة. إعطاء الأولوية للرفاهية والنزاهة الأخلاقية للمستخدمين. تجنب استخدام لغة سامة أو مهينة أو مسيئة. حافظ على نبرة محترمة. لا تنشئ أو تروج أو تشارك في مناقشات حول محتوى للبالغين. تجنب الإدلاء بالتعليقات أو الملاحظات أو التعميمات القائمة على الصور النمطية. لا تحاول الوصول إلى معلومات شخصية أو خاصة أو إنتاجها أو نشرها. احترم دائما سرية المستخدم. كن إيجابيا ولا تقل أشياء سيئة عن أي شيء. هدفك الأساسي هو تجنب الاجابات المؤذية ، حتى عند مواجهة مدخلات خادعة. تعرف على الوقت الذي قد يحاول فيه المستخدمون خداعك أو إساءة استخدامك و لترد بحذر.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:\n### Input: [|Human|] {Question}\n### Response: [|AI|]"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
def get_response(text, tokenizer=tokenizer, model=model):
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    inputs = input_ids.to(device)
    input_len = inputs.shape[-1]
    generate_ids = model.generate(
        inputs,
        top_p=0.9,
        temperature=0.3,
        max_length=2048 - input_len,
        min_length=input_len + 4,
        repetition_penalty=1.2,
        do_sample=True,
    )
    response = tokenizer.batch_decode(
        generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
    )[0]
    # Keep only the assistant reply that follows the response marker
    response = response.split("### Response: [|AI|]")[-1]
    return response

ques = "ما هي عاصمة الامارات؟"
text = prompt_ar.format_map({'Question': ques})
print(get_response(text))

ques = "What is the capital of UAE?"
text = prompt_eng.format_map({'Question': ques})
print(get_response(text))
```
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Inception](https://www.inceptioniai.org/en/), [Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)](https://mbzuai.ac.ae/), and [Cerebras Systems](https://www.cerebras.net/).
- **Language(s) (NLP):** Arabic (MSA) and English
- **License:** Apache 2.0
- **Finetuned from model :** [inception-mbzuai/jais-13b](https://huggingface.co/inception-mbzuai/jais-13b)
- **Input:** Text only data.
- **Output:** Model generates text.
- **Paper :** [Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models](https://arxiv.org/abs/2308.16149)
- **Demo :** [Access here](https://arabic-gpt.ai)
## Intended Use
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
We release the jais-13b-chat model under a full open-source license. We welcome all feedback and opportunities to collaborate.
This model is the first release from the Inception - MBZUAI - Cerebras partnership and, at the time of release, achieved state-of-the-art results across a comprehensive Arabic test suite, as described in the accompanying tech report.
Some potential downstream uses include:
- *Research*: This model can be used by researchers and developers.
- *Commercial Use*: Jais-13b-chat can be directly used for chat with suitable prompting or further fine-tuned for specific use cases.
Some potential use cases include:
- Chat-assistants.
- Customer service.
Audiences that we hope will benefit from our model:
- *Academics*: For those researching Arabic natural language processing.
- *Businesses*: Companies targeting Arabic-speaking audiences.
- *Developers*: Those integrating Arabic language capabilities in apps.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
While jais-13b-chat is a powerful bilingual Arabic and English model, it is essential to understand its limitations and the potential for misuse.
It is prohibited to use the model in any manner that violates applicable laws or regulations.
The following are some example scenarios where the model should not be used.
- *Malicious Use*: The model should not be used for generating harmful, misleading, or inappropriate content. This includes but is not limited to:
- Generating or promoting hate speech, violence, or discrimination.
- Spreading misinformation or fake news.
- Engaging in or promoting illegal activities.
- *Sensitive Information*: The model should not be used to handle or generate personal, confidential, or sensitive information.
- *Generalization Across All Languages*: Jais-13b is bilingual and optimized for Arabic and English; it should not be assumed to have equal proficiency in other languages or dialects.
- *High-Stakes Decisions*: The model should not be used to make high-stakes decisions without human oversight. This includes medical, legal, financial, or safety-critical decisions.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is trained on publicly available data which was in part curated by Inception. We have employed different
techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, like all LLMs, will exhibit some bias.
The model is trained as an AI assistant for Arabic and English speakers. The model is limited to produce responses for queries in these two languages
and may not produce appropriate responses to other language queries.
By using Jais, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content.
The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use.
We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
jais-13b-chat model is finetuned with both Arabic and English prompt-response pairs. We included a wide range of
instructional data across various domains. In total, our instruction-tuning
dataset has 3.8M and 5.9M prompt-response pairs for Arabic and English, respectively. For English, we used
publicly available instruction tuning datasets. For Arabic, we internally curated instruction data and augmented it with translated Arabic data.
Further details about the training data can be found in the technical report.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
In instruction tuning, each instance comprises a prompt and its corresponding response.
Padding is applied to each instance since, unlike pretraining, finetuning is done with unpacked data.
We utilize the same autoregressive objective as employed in the pretraining of the LLM.
However, we mask the loss on the prompt, i.e., backpropagation is performed only on answer tokens.
The training process was performed on the Condor Galaxy 1 (CG-1) supercomputer platform.
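The prompt-masking step described above can be sketched as follows (a minimal illustration of the idea, not the actual Jais training code; `IGNORE_INDEX = -100` follows the common convention by which cross-entropy loss skips masked label positions):

```python
IGNORE_INDEX = -100  # positions labeled -100 are skipped by the cross-entropy loss

def build_labels(prompt_ids, answer_ids):
    """Concatenate prompt and answer token ids; mask the prompt in the labels
    so that backpropagation only flows through answer tokens."""
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels

input_ids, labels = build_labels([10, 11, 12], [20, 21])
# input_ids == [10, 11, 12, 20, 21]; labels == [-100, -100, -100, 20, 21]
```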
#### Training Hyperparameters
| Hyperparameter | Value |
|----------------------------|----------------|
| Precision | fp32 |
| Optimizer | AdamW |
| Learning rate | 0 to 6.7e-04 (<= 400 steps) |
| | 6.7e-04 to 6.7e-05 (> 400 steps) |
| Weight decay | 0.1 |
| Batch size | 3392 |
| Steps | 8705 |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
We conducted a comprehensive evaluation of Jais-chat and benchmarked it against other leading base language models, focusing on both English and Arabic. The evaluation criteria spanned various dimensions, including:
- **Knowledge:** How well the model answers factual questions.
- **Reasoning:** The model's ability to answer questions requiring reasoning.
- **Misinformation/Bias:** Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
Arabic evaluation results:
| Models | Avg |EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|-------------------|-------|------|----------|-------|-----------|------|--------|------------|-------|------------|------------|-------------|
| Jais-chat (13B) | **48.4** | 39.7 | 34.0 | 52.6 | 61.4 | 67.5 | 65.7 | 47.0 | 40.7 | 31.6 | 44.8 | 56.4 |
| BLOOMz (7.1B) | 42.9 | 34.9 | 31.0 | 44.0 | 38.1 | 59.1 | 66.6 | 42.8 | 30.2 | 29.2 | 48.4 | 55.8 |
| mT0-XXL (13B) | 40.9 | 31.5 | 31.2 | 36.6 | 33.9 | 56.1 | 77.8 | 44.7 | 26.1 | 27.8 | 44.5 | 45.3 |
| LLaMA2-Chat (13B) | 38.1 | 26.3 | 29.1 | 33.1 | 32.0 | 52.1 | 66.0 | 36.3 | 24.1 | 28.4 | 48.6 | 47.2 |
| AraBART (550M) | 36.7 | 26.5 | 27.5 | 34.3 | 28.1 | 52.6 | 57.1 | 34.6 | 25.1 | 28.6 | 49.8 | 48.8 |
| AraT5 (220M) | 32.0 | 24.7 | 23.8 | 26.3 | 25.5 | 50.4 | 58.2 | 33.9 | 24.7 | 25.4 | 20.9 | 47.2 |
All tasks above report accuracy or F1 scores (the higher the better). For the sake of brevity, we do not include results on English tasks.
Detailed comparisons in both languages and evaluation dataset details can be found in the technical report.
## Generation Example
<p align="center"> <img src="https://huggingface.co/inception-mbzuai/jais-13b/resolve/main/Rent_Example.png" width="600" /></p>
## Citation
```
@misc{sengupta2023jais,
title={Jais and Jais-chat: Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models},
author={Neha Sengupta and Sunil Kumar Sahu and Bokang Jia and Satheesh Katipomu and Haonan Li and Fajri Koto and Osama Mohammed Afzal and Samta Kamboj and Onkar Pandit and Rahul Pal and Lalit Pradhan and Zain Muhammad Mujahid and Massa Baali and Alham Fikri Aji and Zhengzhong Liu and Andy Hock and Andrew Feldman and Jonathan Lee and Andrew Jackson and Preslav Nakov and Timothy Baldwin and Eric Xing},
year={2023},
eprint={2308.16149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Copyright Inception Institute of Artificial Intelligence Ltd.
|
chengyineng/bloom_prompt_tuning_1698239510.470763
|
chengyineng
| 2023-10-25T13:11:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-25T13:11:54Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
AlexMei/ppo-Huggy
|
AlexMei
| 2023-10-25T13:10:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-25T12:37:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AlexMei/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AmrMorgado/Reinforce-PixelCopter
|
AmrMorgado
| 2023-10-25T13:09:41Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-12T20:58:36Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.80 +/- 29.63
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sainteye/ifoodie-rating-reset-v8
|
sainteye
| 2023-10-25T13:01:21Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-25T13:01:18Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ifoodie-rating-reset-v8
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9822485446929932
---
# ifoodie-rating-reset-v8
['優質', '差', '普通']
## Example Images
# #### 優質
# 
#
# #### 差
# 
#
# #### 普通
# 
#
|
Yntec/MeinaAlter
|
Yntec
| 2023-10-25T13:00:15Z | 301 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Landscapes",
"Female",
"Meina",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-25T11:36:37Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Landscapes
- Female
- Meina
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# Meina Alter
Original model page: https://civitai.com/models/20945?modelVersionId=24933
Sample and prompt:

summer vibrancy, Portrait of Cartoon pretty CUTE girl waiting for rain, anime fantasy illustration by tomoyuki yamasaki, DETAILED CHIBI EYES, kyoto studio, madhouse, trending on ufotable. artstation visible brushstrokes art By ROSSDRAWS and Ellen paige and charles sillem lidderdale and hayao miyazaki
|
devvanshhh/my-custom-repo2
|
devvanshhh
| 2023-10-25T12:56:25Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T10:49:18Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my-custom-repo2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-custom-repo2
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Rouge1: 5.2564
- Rouge2: 4.3885
- Rougel: 4.8606
- Rougelsum: 4.8628
- Gen Len: 8.7847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0301 | 1.0 | 1500 | 0.0186 | 6.1606 | 5.3745 | 5.7762 | 5.7752 | 5.4517 |
| 0.0169 | 2.0 | 3000 | 0.0115 | 5.5084 | 4.6793 | 5.1194 | 5.1214 | 7.6477 |
| 0.0136 | 3.0 | 4500 | 0.0090 | 5.2597 | 4.4065 | 4.8716 | 4.8723 | 8.557 |
| 0.0118 | 4.0 | 6000 | 0.0079 | 5.2273 | 4.3653 | 4.8349 | 4.8352 | 8.815 |
| 0.0107 | 5.0 | 7500 | 0.0076 | 5.2564 | 4.3885 | 4.8606 | 4.8628 | 8.7847 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
swjin/cppe5_use_data_finetuning
|
swjin
| 2023-10-25T12:49:32Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-25T07:54:48Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: cppe5_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cppe5_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
100rab25/swin-tiny-patch4-window7-224-fraud_number_classification
|
100rab25
| 2023-10-25T12:49:16Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-25T12:33:25Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-fraud_number_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9962825278810409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-fraud_number_classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0107
- Accuracy: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0229 | 1.0 | 19 | 0.0516 | 0.9851 |
| 0.0193 | 2.0 | 38 | 0.0107 | 0.9963 |
| 0.0062 | 3.0 | 57 | 0.0275 | 0.9963 |
| 0.0172 | 4.0 | 76 | 0.0313 | 0.9963 |
| 0.028 | 5.0 | 95 | 0.0431 | 0.9926 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ISTA-DASLab/switch-large-128_qmoe
|
ISTA-DASLab
| 2023-10-25T12:38:58Z | 3 | 0 |
transformers
|
[
"transformers",
"switch_transformers",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-24T14:58:41Z |
---
license: apache-2.0
---
# switch-large-128_qmoe
This is the [google/switch-large-128](https://huggingface.co/google/switch-large-128) model quantized with the QMoE framework to ternary precision and stored in the custom further compressed QMoE format.
Please see the [QMoE repository](https://github.com/IST-DASLab/qmoe) for how to use this model.
|
bhavya342/my_awesome_model
|
bhavya342
| 2023-10-25T12:38:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-25T12:37:06Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8863
- Accuracy: 0.0588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 2.0199 | 0.0588 |
| No log | 2.0 | 10 | 1.8863 | 0.0588 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
limerooster/vietnamese-llama-2-ngkplnltc-linear
|
limerooster
| 2023-10-25T12:38:18Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-25T12:37:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
ISTA-DASLab/switch-base-128_qmoe
|
ISTA-DASLab
| 2023-10-25T12:38:16Z | 5 | 1 |
transformers
|
[
"transformers",
"switch_transformers",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-24T15:54:18Z |
---
license: apache-2.0
---
# switch-base-128_qmoe
This is the [google/switch-base-128](https://huggingface.co/google/switch-base-128) model quantized with the QMoE framework to ternary precision and stored in the custom further compressed QMoE format.
Please see the [QMoE repository](https://github.com/IST-DASLab/qmoe) for how to use this model.
|
espnet/zuazo_commonvoice_asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp
|
espnet
| 2023-10-25T12:18:49Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"eu",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2023-10-25T12:18:38Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: eu
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/zuazo_commonvoice_asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp`
This model was trained by Xabier de Zuazo using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5d0758e2a7063b82d1f10a8ac2de98eb6cf8a352
pip install -e .
cd egs2/commonvoice/asr1.eu.transducer_conformer5
./run.sh --skip_data_prep false --skip_train true --download_model espnet/zuazo_commonvoice_asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 25 12:18:16 CEST 2023`
- python version: `3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) [GCC 11.4.0]`
- espnet version: `espnet 202308`
- pytorch version: `pytorch 2.0.1`
- Git hash: `5d0758e2a7063b82d1f10a8ac2de98eb6cf8a352`
- Commit date: `Wed Aug 30 18:03:42 2023 -0400`
## exp/asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transducer_lm_lm_train_lm_eu_bpe150_valid.loss.ave_asr_model_valid.loss.ave/test_eu|6640|49267|92.9|6.6|0.5|0.8|8.0|33.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transducer_lm_lm_train_lm_eu_bpe150_valid.loss.ave_asr_model_valid.loss.ave/test_eu|6640|373913|98.7|0.6|0.8|0.4|1.7|33.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_transducer_lm_lm_train_lm_eu_bpe150_valid.loss.ave_asr_model_valid.loss.ave/test_eu|6640|208360|97.3|1.5|1.2|0.5|3.2|33.6|
## exp/asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp/decode_transducer_lm_lm_train_lm_eu_bpe150_valid.loss.ave_asr_model_valid.loss.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev_eu|6640|49505|93.5|6.1|0.4|0.8|7.3|31.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev_eu|6640|376502|98.9|0.5|0.6|0.3|1.4|31.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev_eu|6640|209465|97.7|1.3|1.1|0.4|2.8|31.7|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transducer_conformer5.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/asr_train_asr_transducer_conformer5_raw_eu_bpe150_sp
ngpu: 1
seed: 2022
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_eu_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_eu_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_eu_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_eu_bpe150_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train_eu_sp/wav.scp
- speech
- sound
- - dump/raw/train_eu_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_eu/wav.scp
- speech
- sound
- - dump/raw/dev_eu/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 4.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- A
- ▁
- I
- E
- Z
- .
- R
- N
- U
- S
- O
- T
- KO
- K
- ▁E
- TU
- TE
- RA
- EN
- L
- ','
- LA
- TA
- AK
- ▁A
- AN
- ▁DA
- RE
- KA
- P
- GO
- IN
- B
- M
- ▁DU
- RI
- GU
- ▁ETA
- D
- ER
- UR
- ▁BA
- ▁P
- H
- MA
- ▁G
- ▁I
- ▁HA
- TZEN
- LE
- ▁EZ
- ▁O
- EK
- GI
- ▁BAT
- DA
- DU
- TZA
- KI
- DI
- RO
- ▁GA
- REN
- AR
- TEN
- GA
- TIK
- RRI
- ▁BI
- LI
- ▁BER
- G
- ▁AR
- TO
- ERA
- AREN
- ▁ZI
- ▁DE
- ▁BE
- X
- BA
- ▁DI
- ▁IZAN
- ▁ZE
- ETAN
- ▁ZEN
- EAN
- IA
- ▁JA
- ▁ERE
- ▁DITU
- ▁ZA
- ▁ERA
- LO
- ▁HOR
- NTZ
- ▁DIRA
- MEN
- ▁HI
- ▁F
- F
- LDE
- ZIO
- '?'
- ▁ZU
- '-'
- DO
- ▁EGIN
- TZEKO
- ▁BEHAR
- TZI
- BIL
- ▁IN
- RIK
- ▁HORI
- ▁SA
- ▁NA
- BIDE
- ▁KON
- ▁HE
- ▁ZUEN
- ▁MU
- ▁BESTE
- ▁SO
- ▁HERRI
- ▁IKAS
- ▁NO
- ▁ALD
- ▁NI
- ▁TX
- ABE
- KETA
- ▁BAINA
- C
- '!'
- V
- Y
- ':'
- ;
- '"'
- Í
- Q
- Ñ
- W
- J
- ‘
- ’
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf:
joint_space_size: 640
use_preprocessor: true
token_type: bpe
bpemodel: data/eu_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_eu_bpe150_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
report_cer: true
report_wer: true
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
positional_dropout_rate: 0.1
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
rel_pos_type: latest
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transducer
decoder_conf:
rnn_type: lstm
num_layers: 1
hidden_size: 640
dropout: 0.1
dropout_embed: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202308'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Acly/hat
|
Acly
| 2023-10-25T12:16:31Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-10-25T12:08:31Z |
---
license: apache-2.0
---
# HAT Super-Resolution models
This mirrors some models from "HAT: Hybrid Attention Transformer for Image Restoration" by Xiangyu Chen, Xintao Wang, Wenlong Zhang, Xiangtao Kong, Jiantao Zhou and Chao Dong
* Original repository: [XPixelGroup/HAT (Github)](https://github.com/XPixelGroup/HAT)
* Original weights: [HAT (Google Drive)](https://drive.google.com/drive/folders/1HpmReFfoUqUbnAOQ7rvOeNU3uf_m69w0)
|
camillebri/mapsmmmm
|
camillebri
| 2023-10-25T12:12:28Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-25T12:12:27Z |
# Model Card for None
**architecture**: Conv4_FC3
**multi_network**: False
**dropout**: 0.0
**latent_space_dimension**: 64
**latent_space_size**: 2
**selection_metrics**: ['loss']
**label**: diagnosis
**selection_threshold**: 0.0
**gpu**: True
**n_proc**: 32
**batch_size**: 32
**evaluation_steps**: 20
**seed**: 0
**deterministic**: False
**compensation**: memory
**transfer_path**: ../../autoencoders/exp3/maps
**transfer_selection_metric**: loss
**use_extracted_features**: False
**multi_cohort**: False
**diagnoses**: ['AD', 'CN']
**baseline**: True
**normalize**: True
**data_augmentation**: False
**sampler**: random
**n_splits**: 5
**epochs**: 200
**learning_rate**: 1e-05
**weight_decay**: 0.0001
**patience**: 10
**tolerance**: 0.0
**accumulation_steps**: 1
**optimizer**: Adam
**preprocessing_dict**: {'preprocessing': 't1-linear', 'mode': 'roi', 'use_uncropped_image': False, 'roi_list': ['leftHippocampusBox', 'rightHippocampusBox'], 'uncropped_roi': False, 'prepare_dl': False, 'file_type': {'pattern': '*space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz', 'description': 'T1W Image registered using t1-linear and cropped (matrix size 169×208×179, 1 mm isotropic voxels)', 'needed_pipeline': 't1-linear'}}
**mode**: roi
**network_task**: classification
**caps_directory**: $WORK/../commun/datasets/adni/caps/caps_v2021
**tsv_path**: $WORK/Aramis_tools/ClinicaDL_tools/experiments_ADDL/data/ADNI/train
**validation**: KFoldSplit
**num_networks**: 2
**label_code**: {'AD': 0, 'CN': 1}
**output_size**: 2
**input_size**: [1, 50, 50, 50]
**loss**: None
|
jondurbin/airoboros-3b-3p11
|
jondurbin
| 2023-10-25T12:09:57Z | 18 | 5 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"dataset:jondurbin/airoboros-3.1",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-10-17T08:09:53Z |
---
license: cc-by-sa-4.0
datasets:
- jondurbin/airoboros-3.1
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).
Base model is https://hf.co/stabilityai/stablelm-3b-4e1t
License (cc-by-sa-4.0): https://creativecommons.org/licenses/by-sa/4.0/
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is a prompt fix release. 3.1 required a single space after the last `[/INST]`, which was highly annoying and obnoxious, so I re-tuned the models without this. Otherwise, it's the same as 3.1.
A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
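For reference, the format above can be assembled by hand. This is a minimal sketch (the exact whitespace may differ slightly from what the tokenizer's chat template emits, so prefer `apply_chat_template` as shown below for production use):

```python
def build_llama2_prompt(system, turns):
    """Assemble a multi-turn llama-2 chat prompt.

    turns is a list of (user, assistant) pairs; pass None as the
    assistant reply for the final turn the model should complete.
    """
    user, reply = turns[0]
    # First turn carries the system block.
    prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n{user} [/INST]"
    if reply is not None:
        prompt += f" {reply} </s><s>"
    for user, reply in turns[1:]:
        prompt += f"[INST] {user} [/INST]"
        if reply is not None:
            prompt += f" {reply} </s><s>"
    return prompt
```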
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
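To make that concrete, here is a minimal, illustrative extractor/evaluator. It only handles the handful of operators used in the example above (the function names and operator table are my own, not part of MathJSON itself) — for real workloads, use compute-engine or mathjson.py as described above:

```python
import json
import math
import re

# Tiny, incomplete operator table covering only this example.
OPS = {
    "Multiply": lambda *xs: math.prod(xs),
    "Power": lambda a, b: a ** b,
    "Add": lambda *xs: sum(xs),
}
CONSTANTS = {"Pi": math.pi}

def extract_mathjson(text):
    # Pull the JSON out from between the <mathjson> tags.
    match = re.search(r"<mathjson>(.*?)</mathjson>", text, re.S)
    if not match:
        raise ValueError("no <mathjson> block found")
    return json.loads(match.group(1))

def evaluate(expr):
    # Lists are [operator, *args]; strings are constants; numbers pass through.
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*(evaluate(a) for a in args))
    if isinstance(expr, str):
        return CONSTANTS[expr]
    return expr
```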
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so the model doesn't invent an answer when the context is completely unrelated.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
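Since the delimiter format is rigid, it's easy to assemble programmatically. A minimal sketch (the function name is illustrative):

```python
def build_context_prompt(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs; returns the
    closed-context prompt in the BEGININPUT/.../ENDINSTRUCTION format."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)
```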
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI's function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
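On the receiving end, you'd parse the model's JSON and dispatch to a real implementation. A toy sketch (the registry and the `file_analytics` stub are illustrative — a real tool would open the user's file rather than take a `text` argument):

```python
import json

def file_analytics(action, filters, text=""):
    # Toy implementation of the single function from the example above.
    if action == "count_occurrences":
        return text.count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

# Map function names in the model's response to callables on your side.
FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output, **extra):
    call = json.loads(model_output)
    return FUNCTIONS[call["function"]](**call["params"], **extra)
```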
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
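The arithmetic in the sample response is easy to verify mechanically — this small sketch brute-forces the player count (essentially Solution 3's approach) and confirms the result of Solution 1's equation:

```python
def round_robin_games(n):
    # Each of the n players plays every other player exactly once,
    # so the number of games is C(n, 2) = n(n-1)/2.
    return n * (n - 1) // 2

# Search for the player count that yields exactly 45 games.
players = next(n for n in range(2, 100) if round_robin_games(n) == 45)
print(players)  # → 10
```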
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, written off the top of my head and completely untested, and would obviously require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    # Replace :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string
    # ... return text content


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://\S+)", input_text, re.I)))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding [brackets] from the tool input.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3)[1:-1], **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
jake-walker/poca-SoccerTwos
|
jake-walker
| 2023-10-25T11:55:30Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-10-25T11:55:18Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jake-walker/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gshields/translate_model_error_v0.4
|
gshields
| 2023-10-25T11:51:26Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-hi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-15T08:47:01Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translate_model_error_v0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translate_model_error_v0.4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9509
- Bleu: 9.6073
- Gen Len: 10.2667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 8 | 5.0268 | 9.7039 | 10.5333 |
| No log | 2.0 | 16 | 4.9509 | 9.6073 | 10.2667 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
camillebri/mapsmm
|
camillebri
| 2023-10-25T11:43:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-25T11:43:07Z |
# Model Card for None
architecture: Conv4_FC3
multi_network: False
dropout: 0.0
latent_space_dimension: 64
latent_space_size: 2
selection_metrics: ['loss']
label: diagnosis
selection_threshold: 0.0
gpu: True
n_proc: 32
batch_size: 32
evaluation_steps: 20
seed: 0
deterministic: False
compensation: memory
transfer_path: ../../autoencoders/exp3/maps
transfer_selection_metric: loss
use_extracted_features: False
multi_cohort: False
diagnoses: ['AD', 'CN']
baseline: True
normalize: True
data_augmentation: False
sampler: random
n_splits: 5
epochs: 200
learning_rate: 1e-05
weight_decay: 0.0001
patience: 10
tolerance: 0.0
accumulation_steps: 1
optimizer: Adam
preprocessing_dict: {'preprocessing': 't1-linear', 'mode': 'roi', 'use_uncropped_image': False, 'roi_list': ['leftHippocampusBox', 'rightHippocampusBox'], 'uncropped_roi': False, 'prepare_dl': False, 'file_type': {'pattern': '*space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz', 'description': 'T1W Image registered using t1-linear and cropped (matrix size 169×208×179, 1 mm isotropic voxels)', 'needed_pipeline': 't1-linear'}}
mode: roi
network_task: classification
caps_directory: $WORK/../commun/datasets/adni/caps/caps_v2021
tsv_path: $WORK/Aramis_tools/ClinicaDL_tools/experiments_ADDL/data/ADNI/train
validation: KFoldSplit
num_networks: 2
label_code: {'AD': 0, 'CN': 1}
output_size: 2
input_size: [1, 50, 50, 50]
loss: None
|
camillebri/maps
|
camillebri
| 2023-10-25T11:38:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-25T11:38:14Z |
# Model Card for None
architecture: Conv4_FC3
multi_network: False
dropout: 0.0
latent_space_dimension: 64
latent_space_size: 2
selection_metrics: ['loss']
label: diagnosis
selection_threshold: 0.0
gpu: True
n_proc: 32
batch_size: 32
evaluation_steps: 20
seed: 0
deterministic: False
compensation: memory
transfer_path: ../../autoencoders/exp3/maps
transfer_selection_metric: loss
use_extracted_features: False
multi_cohort: False
diagnoses: ['AD', 'CN']
baseline: True
normalize: True
data_augmentation: False
sampler: random
n_splits: 5
epochs: 200
learning_rate: 1e-05
weight_decay: 0.0001
patience: 10
tolerance: 0.0
accumulation_steps: 1
optimizer: Adam
preprocessing_dict: {'preprocessing': 't1-linear', 'mode': 'roi', 'use_uncropped_image': False, 'roi_list': ['leftHippocampusBox', 'rightHippocampusBox'], 'uncropped_roi': False, 'prepare_dl': False, 'file_type': {'pattern': '*space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz', 'description': 'T1W Image registered using t1-linear and cropped (matrix size 169×208×179, 1 mm isotropic voxels)', 'needed_pipeline': 't1-linear'}}
mode: roi
network_task: classification
caps_directory: $WORK/../commun/datasets/adni/caps/caps_v2021
tsv_path: $WORK/Aramis_tools/ClinicaDL_tools/experiments_ADDL/data/ADNI/train
validation: KFoldSplit
num_networks: 2
label_code: {'AD': 0, 'CN': 1}
output_size: 2
input_size: [1, 50, 50, 50]
loss: None
|
nitinbhayana/Llama-2-7b-chat-hf-adapter-cat-v1.1
|
nitinbhayana
| 2023-10-25T11:36:22Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-25T11:36:03Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
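The calculator referenced above essentially multiplies hardware power draw by runtime and the grid's carbon intensity. A minimal pure-Python sketch of that estimate (the wattage, PUE, and intensity values in the example are illustrative assumptions, not measurements for this model):

```python
def estimate_co2_grams(power_watts, hours, carbon_intensity_g_per_kwh, pue=1.0):
    """Rough CO2eq estimate following the Lacoste et al. (2019) methodology:
    energy (kWh) = power * hours * PUE; emissions = energy * grid intensity."""
    energy_kwh = (power_watts / 1000.0) * hours * pue
    return energy_kwh * carbon_intensity_g_per_kwh

# Example: a 300 W GPU running for 24 h on a grid at 400 gCO2eq/kWh
print(round(estimate_co2_grams(300, 24, 400)))  # 2880
```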
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
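For reference, the settings listed above map onto the `transformers` `BitsAndBytesConfig` API. This is a sketch assuming a recent `transformers`/`bitsandbytes` install, not the exact code used during training:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# matching the config values recorded above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

The config object would then be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained`.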
### Framework versions
- PEFT 0.6.0.dev0
|
kejolong/NIME2.5DSD
|
kejolong
| 2023-10-25T11:35:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-25T11:14:02Z |
---
license: creativeml-openrail-m
---
|
igpaub/ppo-SnowballTarget
|
igpaub
| 2023-10-25T11:33:55Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-10-25T09:34:00Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: igpaub/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lingjoor/mistral-7b-Instruct-v0.1-neft-qlora
|
lingjoor
| 2023-10-25T11:31:49Z | 0 | 0 |
peft
|
[
"peft",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-10-21T00:05:24Z |
---
library_name: peft
license: cc-by-sa-4.0
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
lingjoor/qwen-14b-neft-qlora
|
lingjoor
| 2023-10-25T11:31:14Z | 0 | 0 |
peft
|
[
"peft",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-10-25T08:06:57Z |
---
library_name: peft
license: cc-by-sa-4.0
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
uer/albert-base-chinese-cluecorpussmall
|
uer
| 2023-10-25T11:21:39Z | 622 | 38 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1909.11942",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "中国的首都是[MASK]京"
---
# Chinese ALBERT
## Model description
This is the set of Chinese ALBERT models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which builds on UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.
You can download the model either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| -------- | :-----------------------: |
| **ALBERT-Base** | [**L=12/H=768 (Base)**][base] |
| **ALBERT-Large** | [**L=24/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import BertTokenizer, AlbertForMaskedLM, FillMaskPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> model = AlbertForMaskedLM.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
>>> unmasker = FillMaskPipeline(model, tokenizer)
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '中 国 的 首 都 是 北 京 。',
'score': 0.8528032898902893,
'token': 1266,
'token_str': '北'},
{'sequence': '中 国 的 首 都 是 南 京 。',
'score': 0.07667620480060577,
'token': 1298,
'token_str': '南'},
{'sequence': '中 国 的 首 都 是 东 京 。',
'score': 0.020440367981791496,
'token': 691,
'token_str': '东'},
{'sequence': '中 国 的 首 都 是 维 京 。',
'score': 0.010197942145168781,
'token': 5335,
'token_str': '维'},
{'sequence': '中 国 的 首 都 是 汴 京 。',
'score': 0.0075391442514956,
'token': 3745,
'token_str': '汴'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, AlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = AlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFAlbertModel
tokenizer = BertTokenizer.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
model = TFAlbertModel.from_pretrained("uer/albert-base-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking ALBERT-Base as an example.
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall_bert.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_albert_seq128_dataset.pt \
--seq_length 128 --processes_num 32 --data_processor albert
```
```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/albert/base_config.json \
--output_model_path models/cluecorpussmall_albert_base_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall_bert.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_albert_seq512_dataset.pt \
--seq_length 512 --processes_num 32 --data_processor albert
```
```
python3 pretrain.py --dataset_path cluecorpussmall_albert_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_albert_base_seq128_model.bin-1000000 \
--config_path models/albert/base_config.json \
--output_model_path models/cluecorpussmall_albert_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_albert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_albert_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin
```
### BibTeX entry and citation info
```
@article{lan2019albert,
title={Albert: A lite bert for self-supervised learning of language representations},
author={Lan, Zhenzhong and Chen, Mingda and Goodman, Sebastian and Gimpel, Kevin and Sharma, Piyush and Soricut, Radu},
journal={arXiv preprint arXiv:1909.11942},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
```
[base]:https://huggingface.co/uer/albert-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/albert-large-chinese-cluecorpussmall
|
uer/bart-large-chinese-cluecorpussmall
|
uer
| 2023-10-25T11:20:52Z | 37 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1910.13461",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-27T12:24:39Z |
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "作为电子[MASK]的平台,京东绝对是领先者。如今的刘强[MASK]已经是身价过[MASK]的老板。"
---
# Chinese BART
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which builds on UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.
You can download the set of Chinese BART models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **BART-Base** | [**L=6/H=768 (Base)**][base] |
| **BART-Large** | [**L=12/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (take the case of BART-Base):
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> model = BartForConditionalGeneration.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是[MASK]京", max_length=50, do_sample=False)
[{'generated_text': '中 国 的 首 都 是 北 京'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 512.
Taking BART-Base as an example:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall_bert.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor bart
```
```
python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bart/base_config.json \
--output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 5e-5 --batch_size 8 \
--span_masking --span_max_length 3
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_bart_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 6
```
### BibTeX entry and citation info
```
@article{lewis2019bart,
title={Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1910.13461},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
```
[base]:https://huggingface.co/uer/bart-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/bart-large-chinese-cluecorpussmall
|
uer/bart-base-chinese-cluecorpussmall
|
uer
| 2023-10-25T11:20:31Z | 77 | 17 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:2212.06385",
"arxiv:1910.13461",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "作为电子[MASK]的平台,京东绝对是领先者。如今的刘强[MASK]已经是身价过[MASK]的老板。"
---
# Chinese BART
## Model description
This model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which builds on UER-py to support models with over one billion parameters and extends it to a multimodal pre-training framework.
You can download the set of Chinese BART models either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | Link |
| ----------------- | :----------------------------: |
| **BART-Base** | [**L=6/H=768 (Base)**][base] |
| **BART-Large** | [**L=12/H=1024 (Large)**][large] |
## How to use
You can use this model directly with a pipeline for text2text generation (take the case of BART-Base):
```python
>>> from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> model = BartForConditionalGeneration.from_pretrained("uer/bart-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("中国的首都是[MASK]京", max_length=50, do_sample=False)
[{'generated_text': '中 国 的 首 都 是 北 京'}]
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data.
## Training procedure
The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 512.
Taking BART-Base as an example:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall_bert.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor bart
```
```
python3 pretrain.py --dataset_path cluecorpussmall_bart_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bart/base_config.json \
--output_model_path models/cluecorpussmall_bart_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 5e-5 --batch_size 8 \
--span_masking --span_max_length 3
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bart_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_bart_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 6
```
### BibTeX entry and citation info
```
@article{lewis2019bart,
title={Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension},
author={Lewis, Mike and Liu, Yinhan and Goyal, Naman and Ghazvininejad, Marjan and Mohamed, Abdelrahman and Levy, Omer and Stoyanov, Ves and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:1910.13461},
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
@article{zhao2023tencentpretrain,
title={TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities},
author={Zhao, Zhe and Li, Yudong and Hou, Cheng and Zhao, Jing and others},
journal={ACL 2023},
pages={217},
year={2023}
}
```
[base]:https://huggingface.co/uer/bart-base-chinese-cluecorpussmall
[large]:https://huggingface.co/uer/bart-large-chinese-cluecorpussmall
|
kejolong/NIME2.5DA
|
kejolong
| 2023-10-25T11:07:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-25T10:45:27Z |
---
license: creativeml-openrail-m
---
|
jmoney54378256438905/lizpreciatior_lzlv_70b_exl2-5.25bpw
|
jmoney54378256438905
| 2023-10-25T11:06:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-25T10:56:19Z |
---
license: cc-by-nc-2.0
---
# lzlv_70B
## A Mythomax/MLewd_13B-style merge of selected 70B models
A multi-model merge of several LLaMA2 70B finetunes for roleplaying and creative work. The goal was to create a model that combines creativity with intelligence for an enhanced experience.
Did it work? Probably, maybe. It seemed subjectively better than each of the individual models in my tests.
GGUF 4_K_M + 5_K_M can be found here: https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf/settings
## Procedure:
Models used:
- **NousResearch/Nous-Hermes-Llama2-70b** - A great model for roleplaying, but not the best at following complex instructions.
- **Xwin-LM/Xwin-LM-70B-V0.1** - Excellent at following instructions and quite creative out of the box, so it seemed like the best available model to act as the base for the merge.
- **Doctor-Shotgun/Mythospice-70b** - The wildcard of the three. I was looking for a creative, NSFW-oriented model and came across this while digging through hf. I hadn't heard of it before and apparently no one had bothered to release a quantized version of this model. So I downloaded it and did it myself to test it. It turned out to be more or less what I was looking for as my third component, so I used it here.
A big thank you to the creators of the models above. If you look up Mythospice, you will notice that it also includes Nous-Hermes so it's technically present twice in this mix. This is apparently common practice amongst the cool kids who do 13B models so I don't think this hurts the model.
The merging process was heavily inspired by Undi95's approach in Undi95/MXLewdMini-L2-13B. To be specific, the ratios are:
Component 1: Merge of Mythospice x Xwin with SLERP gradient [0.25, 0.3, 0.5].
Component 2: Merge Xwin x Hermes with SLERP gradient [0.4, 0.3, 0.25].
Finally, both Component 1 and Component 2 were merged with SLERP using weight 0.5.
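The SLERP step used above can be sketched for a single weight tensor as follows. This is only an illustration: real merges operate layer by layer over full checkpoints, and the gradient list assigns a different interpolation weight to each layer group.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the great-circle
    path between the (flattened, normalized) tensors.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# A gradient like [0.25, 0.3, 0.5] assigns a different t to each layer group.
layers_a = [np.random.randn(4, 4) for _ in range(3)]
layers_b = [np.random.randn(4, 4) for _ in range(3)]
merged = [slerp(t, a, b) for t, a, b in zip([0.25, 0.3, 0.5], layers_a, layers_b)]
```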
## Performance
I tested this model for a few days before publishing it. It seems to more or less retain the instruction-following capabilities of Xwin-70B, while seeming to have adopted a lot of the creativity of the other two models.
It handled the more complex scenarios that creative models otherwise tend to struggle with quite well. At the same time, its outputs felt more creative and possibly a bit more NSFW-inclined than Xwin-70b's.
So, is it better? Feels like it to me, subjectively. Is it really better? No clue, test it.
## Prompt format:
Vicuna:

```
USER: [Prompt]
ASSISTANT:
```
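A minimal helper that assembles this prompt format can be sketched as follows (the multi-turn handling is an assumption; adjust the separators to your inference stack):

```python
def build_vicuna_prompt(user_message: str, history=None) -> str:
    """Assemble a Vicuna-style prompt with alternating USER/ASSISTANT turns."""
    turns = []
    for user, assistant in (history or []):
        turns.append(f"USER: {user}\nASSISTANT: {assistant}")
    # The trailing "ASSISTANT:" cues the model to generate the next reply.
    turns.append(f"USER: {user_message}\nASSISTANT:")
    return "\n".join(turns)

prompt = build_vicuna_prompt("Write a haiku about autumn.")
```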
|
GiantTreeG/german-jeopardy-longt5-large
|
GiantTreeG
| 2023-10-25T11:04:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"longt5",
"text2text-generation",
"question-generation",
"german",
"generated_from_trainer",
"de",
"dataset:lmqg/qg_dequad",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T09:34:25Z |
---
language:
- de
tags:
- question-generation
- german
- text2text-generation
- generated_from_trainer
datasets:
- lmqg/qg_dequad
metrics:
- bleu4
- f1
- rouge
- exact_match
model-index:
- name: german-jeopardy-longt5-large
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: lmqg/qg_dequad
type: default
args: default
metrics:
- name: BLEU-4
type: bleu4
value: 9.50
- name: F1
type: f1
value: 32.03
- name: ROUGE-1
type: rouge1
value: 32.79
- name: ROUGE-2
type: rouge2
value: 14.95
- name: ROUGE-L
type: rougel
value: 31.56
- name: ROUGE-Lsum
type: rougelsum
value: 31.57
- name: Exact Match
type: exact_match
value: 1.36
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-jeopardy-longt5-large-1k-64-constant
This model is a fine-tuned version of [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5907
- Brevity Penalty: 0.9367
- System Length: 19517
- Reference Length: 20793
- ROUGE-1: 32.79
- ROUGE-2: 14.95
- ROUGE-L: 31.56
- ROUGE-Lsum: 31.57
- Exact Match: 1.36
- BLEU: 9.50
- F1: 32.03
## Model description
See [google/long-t5-tglobal-large](https://huggingface.co/google/long-t5-tglobal-large) for more information about the
model architecture.
The model was trained on a single NVIDIA RTX 3090 GPU with 24GB of VRAM.
## Intended uses & limitations
This model can be used for question generation on German text.
## Training and evaluation data
See [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 7
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: constant
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Counts 1 | Counts 2 | Counts 3 | Counts 4 | Totals 1 | Totals 2 | Totals 3 | Totals 4 | Precisions 1 | Precisions 2 | Precisions 3 | Precisions 4 | Brevity Penalty | System Length | Reference Length | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Exact Match | BLEU | Mean Generated Length | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:----------------:|:-------:|:-------:|:-------:|:----------:|:-----------:|:-------:|:---------------------:|:------:|
| 6.5987 | 1.0 | 145 | 5.0696 | 3804 | 134 | 2 | 0 | 22913 | 20709 | 18505 | 16301 | 16.6019 | 0.6471 | 0.0108 | 0.0031 | 1.0 | 22913 | 21250 | 0.0783 | 0.007 | 0.0769 | 0.0768 | 0.0 | 0.1374 | 16.2899 | 0.0814 |
| 4.7443 | 2.0 | 291 | 4.2270 | 4022 | 188 | 20 | 0 | 17366 | 15162 | 12958 | 10754 | 23.1602 | 1.2399 | 0.1543 | 0.0046 | 0.7996 | 17366 | 21250 | 0.1028 | 0.012 | 0.0991 | 0.099 | 0.0 | 0.303 | 12.9038 | 0.1073 |
| 4.1412 | 3.0 | 436 | 3.7838 | 3723 | 187 | 26 | 2 | 16515 | 14311 | 12107 | 9903 | 22.5431 | 1.3067 | 0.2148 | 0.0202 | 0.7507 | 16515 | 21250 | 0.0899 | 0.0124 | 0.0886 | 0.0884 | 0.0 | 0.4488 | 12.4769 | 0.0938 |
| 3.6791 | 4.0 | 582 | 3.4246 | 4576 | 549 | 134 | 26 | 21871 | 19667 | 17463 | 15259 | 20.9227 | 2.7915 | 0.7673 | 0.1704 | 1.0 | 21871 | 21250 | 0.1259 | 0.0296 | 0.1204 | 0.1201 | 0.0 | 1.6623 | 14.5676 | 0.1323 |
| 3.3523 | 5.0 | 727 | 3.1723 | 4900 | 796 | 210 | 41 | 19389 | 17185 | 14981 | 12777 | 25.2721 | 4.6319 | 1.4018 | 0.3209 | 0.9085 | 19389 | 21250 | 0.1542 | 0.0449 | 0.1486 | 0.1484 | 0.0005 | 2.4472 | 14.3943 | 0.1585 |
| 3.0161 | 6.0 | 873 | 2.9268 | 5633 | 1182 | 390 | 111 | 19045 | 16841 | 14637 | 12433 | 29.5773 | 7.0186 | 2.6645 | 0.8928 | 0.8907 | 19045 | 21250 | 0.204 | 0.069 | 0.196 | 0.1961 | 0.0045 | 4.1987 | 14.5789 | 0.2074 |
| 2.7639 | 7.0 | 1018 | 2.7601 | 6100 | 1461 | 499 | 165 | 17924 | 15720 | 13516 | 11312 | 34.0326 | 9.2939 | 3.6919 | 1.4586 | 0.8306 | 17924 | 21250 | 0.2409 | 0.0885 | 0.2332 | 0.2331 | 0.0073 | 5.3362 | 13.8553 | 0.2431 |
| 2.5036 | 8.0 | 1164 | 2.5729 | 6765 | 1845 | 701 | 273 | 20179 | 17975 | 15771 | 13567 | 33.525 | 10.2643 | 4.4449 | 2.0122 | 0.9483 | 20179 | 21250 | 0.2682 | 0.1079 | 0.2589 | 0.259 | 0.0059 | 7.0633 | 15.7232 | 0.2689 |
| 2.307 | 8.99 | 1309 | 2.4637 | 7018 | 2047 | 826 | 348 | 19054 | 16850 | 14646 | 12442 | 36.8322 | 12.1484 | 5.6398 | 2.797 | 0.8911 | 19054 | 21250 | 0.2907 | 0.1218 | 0.2799 | 0.2798 | 0.0095 | 8.1681 | 14.8076 | 0.2907 |
| 2.1012 | 10.0 | 1455 | 2.3614 | 7147 | 2127 | 883 | 389 | 18473 | 16269 | 14065 | 11861 | 38.6889 | 13.0739 | 6.278 | 3.2797 | 0.8604 | 18473 | 21250 | 0.3003 | 0.1275 | 0.289 | 0.2888 | 0.0118 | 8.6921 | 14.2736 | 0.3008 |
| 1.9538 | 10.99 | 1600 | 2.2980 | 7481 | 2339 | 997 | 459 | 18524 | 16320 | 14116 | 11912 | 40.3854 | 14.3321 | 7.0629 | 3.8533 | 0.8632 | 18524 | 21250 | 0.3192 | 0.1423 | 0.3064 | 0.3068 | 0.0127 | 9.67 | 14.3757 | 0.3167 |
| 1.7909 | 12.0 | 1746 | 2.2389 | 7675 | 2546 | 1144 | 546 | 18849 | 16645 | 14441 | 12237 | 40.7183 | 15.2959 | 7.9219 | 4.4619 | 0.8804 | 18849 | 21250 | 0.3299 | 0.1528 | 0.3174 | 0.3175 | 0.015 | 10.724 | 14.583 | 0.3279 |
| 1.6691 | 12.99 | 1891 | 2.1813 | 7858 | 2635 | 1179 | 576 | 18643 | 16439 | 14235 | 12031 | 42.1499 | 16.029 | 8.2824 | 4.7876 | 0.8695 | 18643 | 21250 | 0.344 | 0.1626 | 0.33 | 0.33 | 0.0163 | 11.1241 | 14.3848 | 0.3395 |
| 1.5361 | 14.0 | 2037 | 2.1546 | 8016 | 2729 | 1249 | 606 | 18754 | 16550 | 14346 | 12142 | 42.7429 | 16.4894 | 8.7063 | 4.9909 | 0.8754 | 18754 | 21250 | 0.3494 | 0.1664 | 0.3349 | 0.3351 | 0.0163 | 11.5803 | 14.564 | 0.3462 |
| 1.4365 | 14.99 | 2182 | 2.1358 | 8112 | 2839 | 1316 | 647 | 18390 | 16186 | 13982 | 11778 | 44.1109 | 17.5398 | 9.4121 | 5.4933 | 0.856 | 18390 | 21250 | 0.3581 | 0.1761 | 0.3448 | 0.3448 | 0.02 | 12.1055 | 14.1656 | 0.3538 |
| 1.3263 | 16.0 | 2328 | 2.1190 | 8381 | 2990 | 1430 | 731 | 18892 | 16688 | 14484 | 12280 | 44.3627 | 17.9171 | 9.873 | 5.9528 | 0.8827 | 18892 | 21250 | 0.3681 | 0.1831 | 0.3532 | 0.3534 | 0.0209 | 12.9765 | 14.5445 | 0.363 |
| 1.2329 | 17.0 | 2474 | 2.1202 | 8449 | 3101 | 1520 | 786 | 18612 | 16408 | 14204 | 12000 | 45.3954 | 18.8993 | 10.7012 | 6.55 | 0.8678 | 18612 | 21250 | 0.3743 | 0.1901 | 0.3603 | 0.3603 | 0.0227 | 13.5903 | 14.1779 | 0.3692 |
| 1.1557 | 18.0 | 2619 | 2.1282 | 8406 | 3154 | 1558 | 804 | 17958 | 15754 | 13550 | 11346 | 46.8092 | 20.0203 | 11.4982 | 7.0862 | 0.8325 | 17958 | 21250 | 0.3761 | 0.194 | 0.3633 | 0.3636 | 0.0277 | 13.8388 | 13.677 | 0.371 |
| 1.0658 | 19.0 | 2765 | 2.1232 | 8614 | 3241 | 1610 | 839 | 18955 | 16751 | 14547 | 12343 | 45.4445 | 19.3481 | 11.0676 | 6.7974 | 0.886 | 18955 | 21250 | 0.3803 | 0.196 | 0.3654 | 0.3656 | 0.0272 | 14.2084 | 14.3816 | 0.3749 |
| 0.9944 | 19.93 | 2900 | 2.1203 | 8658 | 3273 | 1625 | 859 | 18853 | 16649 | 14445 | 12241 | 45.9237 | 19.6588 | 11.2496 | 7.0174 | 0.8806 | 18853 | 21250 | 0.3833 | 0.1977 | 0.369 | 0.3691 | 0.0268 | 14.3883 | 14.2881 | 0.3775 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nakhyeonn/llama-2-ko-qlora-prompt_1024_new
|
nakhyeonn
| 2023-10-25T10:58:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2023-10-25T10:44:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
joey00072/mistral-finetuned-samsum
|
joey00072
| 2023-10-25T10:57:35Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-10-25T10:11:46Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
yemoncad/peripheral-blood-cell
|
yemoncad
| 2023-10-25T10:46:06Z | 24 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-25T10:39:55Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### peripheral-blood-cell Dreambooth model trained by yemoncad with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
DanielaBreitman/21cmEMU
|
DanielaBreitman
| 2023-10-25T10:15:54Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"cosmology",
"emulator",
"physics",
"21cmFAST",
"en",
"license:mit",
"region:us"
] | null | 2023-05-02T09:24:48Z |
---
license: mit
language:
- en
tags:
- cosmology
- emulator
- physics
- 21cmFAST
---
# 21cmEMU
## Features
- Uses Tensorflow to emulate the following summary statistics: 21-cm power spectrum, 21-cm global brightness temperature, IGM spin temperature, and neutral fraction.
- Uses 21cmFAST to analytically calculate the UV luminosity functions and the Thomson optical depth to the CMB.
## Requirements
- Tensorflow >= 2.6
## Installation
You can install _py21cmEMU_ via [pip] from [PyPI]:
```console
$ pip install py21cmemu
```
## Usage
Please see the [Command-line Reference] for details.
## Contributing
Contributions are very welcome.
To learn more, see the [Contributor Guide].
## License
Distributed under the terms of the [MIT license][license],
_21cmEMU_ is free and open source software.
## Issues
If you encounter any problems,
please [file an issue] along with a detailed description.
## Credits
This project was generated from [@cjolowicz]'s [Hypermodern Python Cookiecutter] template.
[@cjolowicz]: https://github.com/cjolowicz
[pypi]: https://pypi.org/
[hypermodern python cookiecutter]: https://github.com/cjolowicz/cookiecutter-hypermodern-python
[file an issue]: https://github.com/21cmFAST/21cmEMU/issues
[pip]: https://pip.pypa.io/
<!-- github-only -->
[license]: https://github.com/21cmFAST/21cmEMU/blob/main/LICENSE
[contributor guide]: https://github.com/21cmFAST/21cmEMU/blob/main/CONTRIBUTING.md
[command-line reference]: https://21cmEMU.readthedocs.io/en/latest/usage.html
|
Puspaksahu/my_awesome_arith_model
|
Puspaksahu
| 2023-10-25T09:53:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T09:37:04Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_arith_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_arith_model
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3147
- Rouge1: 0.9917
- Rouge2: 0.9123
- Rougel: 0.9594
- Rougelsum: 0.9593
- Gen Len: 8.9019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 31 | 0.5797 | 0.977 | 0.7972 | 0.9038 | 0.9035 | 8.6517 |
| No log | 2.0 | 62 | 0.6579 | 0.9792 | 0.8002 | 0.9055 | 0.9055 | 10.6333 |
| No log | 3.0 | 93 | 0.4883 | 0.982 | 0.8346 | 0.9181 | 0.9182 | 8.7263 |
| No log | 4.0 | 124 | 0.4298 | 0.9813 | 0.8255 | 0.9164 | 0.9163 | 8.8856 |
| No log | 5.0 | 155 | 0.4968 | 0.9849 | 0.8408 | 0.9228 | 0.9225 | 10.5873 |
| No log | 6.0 | 186 | 0.4186 | 0.9805 | 0.8441 | 0.9253 | 0.9253 | 10.1277 |
| No log | 7.0 | 217 | 0.3675 | 0.9837 | 0.8633 | 0.9371 | 0.9373 | 9.2308 |
| No log | 8.0 | 248 | 0.3372 | 0.9909 | 0.9004 | 0.955 | 0.9546 | 9.4566 |
| No log | 9.0 | 279 | 0.3526 | 0.991 | 0.8999 | 0.953 | 0.9529 | 8.7896 |
| No log | 10.0 | 310 | 0.3147 | 0.9917 | 0.9123 | 0.9594 | 0.9593 | 8.9019 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.0+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
irishwerr/taxi-v3
|
irishwerr
| 2023-10-25T09:52:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-25T09:52:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="irishwerr/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
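The tabular Q-learning update behind such an agent can be sketched as follows (the hyperparameters here are illustrative, not the ones used to train this model):

```python
import numpy as np

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])
    return q_table

q = np.zeros((500, 6))  # Taxi-v3 has 500 discrete states and 6 actions
q = q_update(q, state=0, action=1, reward=-1.0, next_state=10)
```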
|
s3nh/stabilityai-japanese-stablelm-base-gamma-7b-GGUF
|
s3nh
| 2023-10-25T09:36:46Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-25T07:25:39Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- **Single-file deployment**: models can be easily distributed and loaded, and do not require any external files for additional information.
- **Extensible**: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- **mmap compatibility**: models can be loaded using mmap for fast loading and saving.
- **Easy to use**: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- **Full information**: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
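Purely as an illustration of the typed key-value idea, metadata might be serialized like this. Note that this is *not* the actual GGUF binary layout (type tags and encoding here are invented); see the GGUF specification for the real format.

```python
import struct

# Illustrative type tags, not GGUF's real ones.
TYPE_U32, TYPE_F32, TYPE_STR = 0, 1, 2

def encode_kv(key: str, value) -> bytes:
    """Length-prefixed UTF-8 key, a one-byte type tag, then a typed value."""
    kb = key.encode("utf-8")
    out = struct.pack("<I", len(kb)) + kb
    if isinstance(value, int):
        return out + struct.pack("<BI", TYPE_U32, value)
    if isinstance(value, float):
        return out + struct.pack("<Bf", TYPE_F32, value)
    if isinstance(value, str):
        vb = value.encode("utf-8")
        return out + struct.pack("<BI", TYPE_STR, len(vb)) + vb
    raise TypeError(f"unsupported type: {type(value)}")

def decode_kv(buf: bytes):
    """Decode one key-value pair produced by encode_kv."""
    klen = struct.unpack_from("<I", buf, 0)[0]
    key = buf[4:4 + klen].decode("utf-8")
    off = 4 + klen
    tag = buf[off]; off += 1
    if tag == TYPE_U32:
        return key, struct.unpack_from("<I", buf, off)[0]
    if tag == TYPE_F32:
        return key, struct.unpack_from("<f", buf, off)[0]
    vlen = struct.unpack_from("<I", buf, off)[0]
    return key, buf[off + 4:off + 4 + vlen].decode("utf-8")
```

The point of the key-value structure is visible here: a reader that does not know a key can still skip it by type, so new metadata never breaks old loaders.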
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
RIOLITE/products_matching_aumet_fine_tune_2023-10-25
|
RIOLITE
| 2023-10-25T09:28:00Z | 12 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-25T09:27:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
latestissue/rwkv-4-world-one-state-ultrachat-cot-65k-ggml-quantized
|
latestissue
| 2023-10-25T09:17:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-10-23T13:46:44Z |
---
license: apache-2.0
---
Source: https://huggingface.co/xiaol/RWKV-4-world-one-state-ultrachat-COT-65k
|
BenneF/xlm-roberta-base-finetuned-panx-de-fr
|
BenneF
| 2023-10-25T09:15:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-25T08:25:12Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1641
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2825 | 1.0 | 715 | 0.1915 | 0.8162 |
| 0.1456 | 2.0 | 1430 | 0.1678 | 0.8393 |
| 0.0942 | 3.0 | 2145 | 0.1641 | 0.8557 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.14.1
|
mhomaid/estate-chat-v0
|
mhomaid
| 2023-10-25T09:10:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-25T08:55:46Z |
---
license: mit
---

# Real Estate Chat Model
|
jpxkqx/ddpm_mediterranean_reanalysis_tas
|
jpxkqx
| 2023-10-25T09:10:31Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"climate",
"super-resolution",
"image-to-image",
"es",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-10-19T13:54:21Z |
---
license: apache-2.0
language:
- es
- en
library_name: diffusers
pipeline_tag: image-to-image
tags:
- climate
- diffusers
- super-resolution
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a Machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to the one produced by CERRA) by downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework is applied.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative.
The denoise model is released under the Apache 2.0 license, making it usable without restrictions anywhere.
# Table of Contents
- [Model Card for Europe Reanalysis Super Resolution](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Training Details](#training-details)
- [Denoise Network](#denoise-network)
- [Noise Scheduler](#noise-scheduler)
- [Training Data](#training-data)
- [Normalization techniques](#normalization-techniques)
- [Results](#results)
- [Normalization](#normalization)
- [Noise Schedulers](#schedulers)
- [Model sizes](#model-sizes)
- [Next Steps](#next-steps)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Authors](#authors)
# Model Details
This model corresponds to a Denoise Neural Network trained with instance normalization over bicubic interpolated inputs.
We have implemented a [diffusers.UNet2DModel](https://huggingface.co/docs/diffusers/api/models/unet2d#diffusers.UNet2DModel) for a Denoising Diffusion Probabilistic Model, with different schedulers: [DDPMScheduler](https://huggingface.co/docs/diffusers/api/schedulers/ddpm), [DDIM](https://huggingface.co/docs/diffusers/api/schedulers/ddim) and [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers/lms_discrete).

## Model Description
<!-- Provide a longer summary of what this model is/does. -->
We present the results of using Diffusion models (DM) for downscaling (from 0.25° to 0.05°) regional reanalysis grids in the Mediterranean area.
- **Developed by:** A team of Predictia Intelligent Data Solutions S.L.
- **Model type:** Vision model
- **Language(s) (NLP):** en, es
- **License:** Apache-2.0
- **Resources for more information:** More information needed
- [GitHub Repo](https://github.com/ECMWFCode4Earth/DeepR)
## Denoise Network
For the Denoise network, we have only explored one architecture, [diffusers.UNet2DModel](https://huggingface.co/docs/diffusers/api/models/unet2d#diffusers.UNet2DModel), with different model sizes, ranging from 3 blocks of 64, 128 and 192 out channels to the default configuration of 4 blocks of 224, 448, 672 and 896 out channels.
This network always takes:
- 2 channels as inputs corresponding to the noisy image at a timestep \\(t\\), and the bicubic upsampled ERA5 field.
- The timestep \\(t\\), which is projected to an embedding that is added to the input.
## Noise Scheduler
Different schedulers have been considered.
- [DDPMScheduler](https://huggingface.co/docs/diffusers/api/schedulers/ddpm)
- [DDIM](https://huggingface.co/docs/diffusers/api/schedulers/ddim)
- [LMSDiscreteScheduler](https://huggingface.co/docs/diffusers/api/schedulers/lms_discrete)
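Regardless of the scheduler used at inference, training relies on the closed-form forward process \\(x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon\\). A minimal NumPy sketch (the schedule values are illustrative, not the ones configured for this model):

```python
import numpy as np

def make_alpha_bar(num_timesteps=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear beta schedule."""
    betas = np.linspace(beta_start, beta_end, num_timesteps)
    return np.cumprod(1.0 - betas)

def add_noise(x0, t, alpha_bar, rng):
    """Noise a clean field x0 to timestep t in one shot."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps  # eps is the denoise network's regression target

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.standard_normal((160, 240))  # a CERRA-sized field
xt, eps = add_noise(x0, t=500, alpha_bar=alpha_bar, rng=rng)
```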
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The dataset used is a composition of the ERA5 and CERRA reanalysis.
The spatial coverage of the input grids (ERA5) is defined below, and corresponds to a 2D array of dimensions (60, 42):
```
longitude: [-8.35, 6.6]
latitude: [46.45, 35.50]
```
On the other hand, the target high-resolution grid (CERRA) corresponds to a 2D matrix of dimensions (240, 160):
```
longitude: [-6.85, 5.1]
latitude: [44.95, 37]
```
The data samples used for training correspond to the period from 1981 to 2013 (both included); the years 2014 to 2017 are used for per-epoch validation.
## Normalization techniques
All of these normalization techniques have been explored during and after [ECMWF Code 4 Earth](https://codeforearth.ecmwf.int/).
**With monthly climatologies**. This corresponds to computing the historical climatologies during the training period for each region (pixel or domain), and normalizing with respect to them. In our case, the monthly climatologies are considered, but they could also be disaggregated by time of day, for example.
- _Pixel-wise_: In this case, the climatology is computed for each pixel of the meteorological field. Then, each pixel is standardized with its own climatology statistics.
- _Domain-wise_: Here, the climatology statistics are computed for the whole domain of interest. After computing the statistics, 2 normalizing schemas are possible:
  - Independent: ERA5 and CERRA are normalized independently, each with its own statistics.
  - Dependent: only the climatology statistics from ERA5 are used to standardize both ERA5 and CERRA simultaneously.
The dependent approach is not feasible for the pixel-wise schema, because there is no direct correspondence between the input and output patch pixels. If we were interested in doing so, one possibility is to compute the statistics over the bicubic-downscaled ERA5 and use those statistics to normalize CERRA.
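A NumPy sketch of the two climatology-based schemas (the array shapes are illustrative):

```python
import numpy as np

def pixelwise_stats(fields):
    """Per-pixel mean/std over the training period. fields: (time, lat, lon)."""
    return fields.mean(axis=0), fields.std(axis=0) + 1e-8

def domainwise_stats(fields):
    """Scalar mean/std over the whole domain and period."""
    return fields.mean(), fields.std() + 1e-8

def standardize(x, mean, std):
    return (x - mean) / std

rng = np.random.default_rng(0)
era5 = rng.normal(15.0, 5.0, size=(120, 42, 60))     # coarse training samples
cerra = rng.normal(15.0, 5.0, size=(120, 160, 240))  # high-resolution targets

# Domain-wise, "dependent" schema: ERA5 statistics normalize both grids.
mu, sigma = domainwise_stats(era5)
era5_norm = standardize(era5, mu, sigma)
cerra_norm = standardize(cerra, mu, sigma)
```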
**Without past information**. This corresponds to normalizing each sample independently by the mean and standard deviation of the ERA5 field. This is known in the ML community as instance normalization. Here, we have to use only the distribution statistics from the inputs as the outputs will not be available during inference, but 2 different variations are possible in our use case:
- Use the statistics of the input ERA5. Recall that it covers a wider area than CERRA.
- Use the statistics of the bicubic downscaled ERA5, which represents the same area as CERRA.
The difference between these two approaches is not merely about calculating the statistics on the downscaled or source ERA5. The key point is that the input patch encompasses a larger area, and therefore a more different distribution. Thus, the second approach seems more correct, as the distribution of the downscaled area will be more similar to the output distribution.
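As a minimal sketch of the instance-normalization approach (the helper functions below are illustrative and not taken from the DeepR codebase):

```python
import numpy as np

def instance_normalize(field, eps=1e-8):
    """Standardize a single 2D meteorological field by its own statistics.

    Returns the normalized field together with the (mean, std) needed to
    map the model prediction back to physical units.
    """
    mean, std = field.mean(), field.std()
    return (field - mean) / (std + eps), (mean, std)

def denormalize(field, mean, std, eps=1e-8):
    """Invert instance normalization using the stored statistics."""
    return field * (std + eps) + mean

# Example: a synthetic temperature-like field on a CERRA-sized grid
patch = np.random.default_rng(0).normal(loc=280.0, scale=5.0, size=(160, 240))
normed, stats = instance_normalize(patch)
```

During inference, only the input (downscaled ERA5) statistics are available, so the same `(mean, std)` pair is reused to de-normalize the predicted high-resolution field.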
# Results
The results of this model are <ins>NOT</ins> considered <ins>ACCEPTABLE</ins>, since they are not competitive with bicubic interpolation, a simple method which is also provided as input to the model. Therefore, although more complex tests were performed, such as including other covariates (e.g. time of day), they are not detailed here because their real effect on the performance of the model cannot be determined.
In this repository, we present the best performing Diffusion Model, which is trained with the scheduler specified at [scheduler_config.json](scheduler_config.json) with the parameters shown in [config.json](config.json) and instance normalization over the downsampled ERA5 inputs.
Below is a sample prediction of the 64M-parameter Diffusion Model, with 1000 inference timesteps, at 00H on January 1, 2018, compared with the CERRA reanalysis.

### Normalization
- Pixel-wise normalization does not make sense in this problem setup, as it erases the spatial pattern and the DDPM is not able to learn anything.
- In contrast, when scaling with domain statistics the DDPM reproduces high-resolution details that are not present in the input ERA5, but it fails to match the actual high-resolution field. The DDPM samples have the appearance of CERRA-like fields, but the MSE computed over a period is 3x larger than that of the bicubic interpolation.
- When scaling with domain statistics, using the same statistics for input and output represents the mean values of the predictions slightly better, but all variants fail to reproduce the variance of the field, as seen in the sample.
- Instance normalization applied directly to the Denoise Network inputs (downsampled ERA5) reproduces the spatial pattern slightly better than the other schemes, and the error metrics are more spatially homogeneous. For example, the spatial pattern of mountainous areas is very well reproduced.
### Schedulers
There is no significant difference in training time or sampling quality (at maximum capabilities). Differences between schedulers may arise during inference, when DDIM or LMSDiscrete may produce higher-quality samples with fewer inference steps, and consequently at lower computational cost.
As satisfying performance is not reached even at maximum capabilities (inference steps = number of training timesteps), scheduler efficiency during sampling has not been investigated; according to the scientific literature, around 40 inference steps (1/25 of the current number) may be sufficient.
### Model sizes
This is strongly related to training time: not only because of the time it takes to run the forward and backward passes of the network, but also because of the limited memory available to load the samples, which forces more (smaller) batches to complete each epoch.
With the limited computational resources available and the dataset considered, the tests carried out indicate an improvement when going from tens of output channels to a few hundred, yielding networks of between 20 and 100 million parameters. However, it was not possible to reach the default size due to failures during training (e.g. gradient explosion).
# Next steps
As these factors (model size, normalization and noise schedulers) have been extensively explored, it is necessary to move the research efforts to other aspects, such as the following:
- Train a VAE, to work with a Latent DM. This can be thought of as a learnable normalization, with the additional advantage of reducing the sample size and therefore the computational cost.
- Train a larger denoise network. To this end, a larger VM and/or training with more samples may be necessary. For example, it may be beneficial to start learning with random patches (during the first epochs) and then fine-tune on the current domain during the last epochs.
- Try other DM flavours, such as Score-Based DMs.
- Try the new architectures available in [diffusers](https://huggingface.co/docs/diffusers/api/models/overview).
Based on the scientific literature for related problems such as Super Resolution in Computer Vision, where models work with larger samples (3 channels rather than 1, and more pixels), better results should be achievable with this architecture type and DM flavour.
To tackle the most limiting factor, we think the best options are to explore options 1 and 2.
# Compute Infrastructure
The use of GPUs in deep learning projects significantly accelerates model training and inference, leading to substantial reductions in computation time and making it feasible to tackle complex tasks and large datasets with efficiency.
The generosity and collaboration of our partners are instrumental to the success of this project, significantly contributing to our research and development endeavors.
- **AI4EOSC**: AI4EOSC stands for "Artificial Intelligence for the European Open Science Cloud." The European Open Science Cloud (EOSC) is a European Union initiative that aims to create a federated environment of research data and services. AI4EOSC is a specific project or initiative within the EOSC framework that focuses on the integration and application of artificial intelligence (AI) technologies in the context of open science.
- **European Weather Cloud**: The European Weather Cloud is the cloud-based collaboration platform for meteorological application development and operations in Europe. Services provided range from delivery of weather forecast data and products to the provision of computing and storage resources, support and expert advice.
## Hardware
For our project, we have deployed two virtual machines (VMs), each featuring a dedicated Graphics Processing Unit (GPU). One VM is equipped with a 16GB GPU, while the other boasts a more substantial 20GB GPU. This resource configuration allows us to efficiently manage a wide range of computing tasks, from data processing to model training, and ultimately drives the successful execution of our project.
## Software
The code used to train and evaluate this model is freely available through its GitHub Repository [ECMWFCode4Earth/DeepR](https://github.com/ECMWFCode4Earth/DeepR) hosted in the ECWMF Code 4 Earth organization.
# Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
- Mario Santa Cruz. Predictia Intelligent Data Solutions S.L.
- Antonio Pérez. Predictia Intelligent Data Solutions S.L.
- Javier Díez. Predictia Intelligent Data Solutions S.L.
|
aiplanet/effi-7b
|
aiplanet
| 2023-10-25T09:07:14Z | 104 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"base_model:huggyllama/llama-7b",
"base_model:finetune:huggyllama/llama-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T10:46:07Z |
---
license: apache-2.0
library_name: transformers
metrics:
- accuracy
pipeline_tag: text-generation
base_model: huggyllama/llama-7b
---
effi 7b is a 7-billion-parameter model built by AI Planet. Inspired by LLaMA, we've built a fine-tuned version of llama-7b with QLoRA. The training procedure and framework versions are provided below along with the model weights.
## Model Details
### Model Description
This model has been fine-tuned on Chain of Thought datasets, which have context from mixed sources with corresponding rationales. The final fine-tuned Large Language Model (LLM) has shown enhanced capabilities for solving novel tasks by providing reasoning.
- **Developed by:** AI Planet
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Llama-2-7b-chat-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
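For reference, the same settings could be expressed with `transformers`' `BitsAndBytesConfig` (a sketch assuming recent `transformers` and `bitsandbytes` versions; argument names mirror the config keys above):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the training-time quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
# This object would then be passed to from_pretrained(..., quantization_config=bnb_config)
```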
### Framework versions
- PEFT 0.5.0.dev0
|
tensorrt/stable-diffusion-2-1
|
tensorrt
| 2023-10-25T08:57:50Z | 11 | 1 |
diffusers
|
[
"diffusers",
"onnx",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2202.00512",
"arxiv:1910.09700",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-25T08:51:57Z |
---
license: openrail++
tags:
- stable-diffusion
- text-to-image
pinned: true
---
# Stable Diffusion v2-1 Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion).
This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`.
- Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt).
- Use it with 🧨 [`diffusers`](#examples)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Examples
Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner.
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Running the pipeline (if you don't swap the scheduler it will run with the default DDIM, in this example we are swapping it to EulerDiscreteScheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "stabilityai/stable-diffusion-2"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, revision="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, height=768, width=768).images[0]
image.save("astronaut_rides_horse.png")
```
**Notes**:
- Despite not being a dependency, we highly recommend installing [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance)
- If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for lower VRAM usage (at the cost of speed)
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of the large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section).
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic.
**Training Procedure**
Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through the OpenCLIP-ViT/H text-encoder.
- The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512.
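The relative downsampling factor described above can be illustrated with a small helper (ours, for illustration only):

```python
def latent_shape(height, width, f=8, latent_channels=4):
    """Map an image of shape (H, W, 3) to its latent shape (H/f, W/f, 4)."""
    assert height % f == 0 and width % f == 0, "dimensions must be divisible by f"
    return (height // f, width // f, latent_channels)

# A 768x768 image (the 768-v-ema.ckpt resolution) maps to a 96x96x4 latent
print(latent_shape(768, 768))  # (96, 96, 4)
```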
We currently provide the following checkpoints:
- `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`.
850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`.
- `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset.
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning.
The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752).
In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
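The learning-rate schedule (linear warmup to 0.0001 over 10,000 steps, then constant) can be sketched as:

```python
def lr_at_step(step, peak_lr=1e-4, warmup_steps=10_000):
    """Linear warmup from 0 to peak_lr, then held constant."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr

print(lr_at_step(5_000))   # halfway through warmup: 5e-05
print(lr_at_step(50_000))  # past warmup: 0.0001
```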
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints:

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.
## Citation
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
scaleflex/clip-vit-base-patch32-openvino
|
scaleflex
| 2023-10-25T08:56:39Z | 0 | 1 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2023-10-24T18:01:59Z |
---
license: mit
language:
- en
---
# CLIP ViT-B/32 in OpenVINO™ format
## Original model details
The CLIP model was developed by OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
## Model type
The model uses a ViT-B/32 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
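A minimal sketch of that contrastive setup (illustrative only, not the original training code): the similarity of every image embedding with every text embedding in a batch is computed, and training pushes up the diagonal of the resulting matrix.

```python
import numpy as np

def clip_logits(image_emb, text_emb, temperature=0.07):
    """Cosine-similarity logits between L2-normalized image/text embeddings."""
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return img @ txt.T / temperature

# For a batch of N (image, text) pairs, the contrastive loss maximizes the
# diagonal of the N x N logit matrix relative to the off-diagonal entries.
rng = np.random.default_rng(0)
logits = clip_logits(rng.normal(size=(4, 512)), rng.normal(size=(4, 512)))
print(logits.shape)  # (4, 4)
```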
## OpenVINO™ optimization
To increase the efficiency of the model during inference, we utilized the OpenVINO™ toolkit for optimization. The table below showcases the inference time improvements achieved with OpenVINO™ compared to the original PyTorch implementation:
| Metric | PyTorch Inference Time (sec) | OpenVINO™ Inference Time (sec) | Similarity |
|:-------------------|-------------------------------:|---------------------------------:|-------------:|
| mean | 0.52 | 0.46 | 1 |
| std | 0.11 | 0.09 | 0 |
| min | 0.39 | 0.36 | 1 |
| max | 0.70 | 0.62 | 1 |
OpenVINO™ offers a 1.12x speedup in inference time compared to PyTorch, measured on the same image over 100 iterations on an Intel(R) Xeon(R) CPU @ 2.20GHz (CPU family: 6, Model: 79).
The results indicate that the OpenVINO™ optimization provides a consistent improvement in inference time while maintaining the same level of accuracy (as indicated by the similarity score).
## Usage
You can utilize this optimized model for faster inference in environments where time is a critical factor. Ensure you have the necessary libraries and dependencies installed to take advantage of OpenVINO™.
```bash
pip install transformers huggingface_hub openvino-dev
```
Then use it for inference:
```python
import os
import numpy as np
from PIL import Image
from huggingface_hub import snapshot_download
from openvino.runtime import Core
from scipy.special import softmax
from transformers import CLIPProcessor
# Download the OV model
ov_path = snapshot_download(repo_id="scaleflex/clip-vit-base-patch32-openvino")
# Load preprocessor for model input
processor = CLIPProcessor.from_pretrained("scaleflex/clip-vit-base-patch32-openvino")
ov_model_xml = os.path.join(ov_path, "clip-vit-base-patch32.xml")
image = Image.open("face.png") # download this example image: http://sample.li/face.png
input_labels = [
"businessman",
"dog playing in the garden",
"beautiful woman",
"big city",
"lake in the mountain",
]
text_descriptions = [f"This is a photo of a {label}" for label in input_labels]
inputs = processor(
text=text_descriptions, images=[image], return_tensors="pt", padding=True
)
# Create OpenVINO core object instance
core = Core()
ov_model = core.read_model(model=ov_model_xml)
# Compile model for loading on device
compiled_model = core.compile_model(ov_model)
# Obtain output tensor for getting predictions
logits_per_image_out = compiled_model.output(0)
# Run inference on preprocessed data and get image-text similarity score
ov_logits_per_image = compiled_model(dict(inputs))[logits_per_image_out]
# Perform softmax on score
probs = softmax(ov_logits_per_image, axis=1)
max_index = np.argmax(probs)
# Use the index to get the corresponding label
label_with_max_prob = input_labels[max_index]
print(
f"The label with the highest probability is: '{label_with_max_prob}' with a probability of {probs[0][max_index] * 100:.2f}%"
)
# The label with the highest probability is: 'beautiful woman' with a probability of 97.87%
```
|
xinyu1205/recognize_anything_model
|
xinyu1205
| 2023-10-25T08:56:24Z | 0 | 43 | null |
[
"image tagging, image captioning",
"image-to-text",
"en",
"arxiv:2306.03514",
"arxiv:2303.05657",
"license:mit",
"region:us"
] |
image-to-text
| 2023-06-14T03:30:22Z |
---
license: mit
language:
- en
pipeline_tag: image-to-text
tags:
- image tagging, image captioning
---
# Recognize Anything & Tag2Text
Model card for <a href="https://recognize-anything.github.io/">Recognize Anything: A Strong Image Tagging Model </a> and <a href="https://tag2text.github.io/">Tag2Text: Guiding Vision-Language Model via Image Tagging</a>.
**Recognition and localization are two foundation computer vision tasks.**
- **The Segment Anything Model (SAM)** excels in **localization capabilities**, while it falls short when it comes to **recognition tasks**.
- **The Recognize Anything Model (RAM) and Tag2Text** exhibit **exceptional recognition abilities**, in terms of **both accuracy and scope**.
|  |
|:--:|
| <b> Pull figure from recognize-anything official repo | Image source: https://recognize-anything.github.io/ </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2306.03514) write in the abstract:
*We present the Recognize Anything Model (RAM): a strong foundation model for image tagging. RAM makes a substantial step for large models in computer vision, demonstrating the zero-shot ability to recognize any common category with high accuracy. By leveraging large-scale image-text pairs for training instead of manual annotations, RAM introduces a new paradigm for image tagging. We evaluate the tagging capability of RAM on numerous benchmarks and observe an impressive zero-shot performance, which significantly outperforms CLIP and BLIP. Remarkably, RAM even surpasses fully supervised models and exhibits a competitive performance compared with the Google tagging API.*
## BibTex and citation info
```
@article{zhang2023recognize,
title={Recognize Anything: A Strong Image Tagging Model},
author={Zhang, Youcai and Huang, Xinyu and Ma, Jinyu and Li, Zhaoyang and Luo, Zhaochuan and Xie, Yanchun and Qin, Yuzhuo and Luo, Tong and Li, Yaqian and Liu, Shilong and others},
journal={arXiv preprint arXiv:2306.03514},
year={2023}
}
@article{huang2023tag2text,
title={Tag2Text: Guiding Vision-Language Model via Image Tagging},
author={Huang, Xinyu and Zhang, Youcai and Ma, Jinyu and Tian, Weiwei and Feng, Rui and Zhang, Yuejie and Li, Yaqian and Guo, Yandong and Zhang, Lei},
journal={arXiv preprint arXiv:2303.05657},
year={2023}
}
```
|
ClaireOzzz/test2
|
ClaireOzzz
| 2023-10-25T08:56:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-25T08:56:08Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: prrr
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - test2
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "prrr" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
|
xinyu1205/recognize-anything-plus-model
|
xinyu1205
| 2023-10-25T08:55:42Z | 0 | 38 | null |
[
"image recognition",
"zero-shot-image-classification",
"en",
"arxiv:2306.03514",
"arxiv:2303.05657",
"license:apache-2.0",
"region:us"
] |
zero-shot-image-classification
| 2023-10-21T07:49:45Z |
---
license: apache-2.0
language:
- en
pipeline_tag: zero-shot-image-classification
tags:
- image recognition
---
# Recognize Anything & Tag2Text
Model card for <a href="https://recognize-anything.github.io/">Recognize Anything Plus Model (RAM++) </a>.
RAM++ is the next generation of RAM, which can recognize any category with high accuracy, including both predefined common categories and diverse open-set categories.
RAM++ outperforms existing SOTA fundamental image recognition models in terms of common tag categories, uncommon tag categories, and human-object interaction phrases.
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2306.03514) write in the abstract:
*We introduce the Recognize Anything Plus Model (RAM++), a fundamental image recognition model with strong open-set recognition capabilities, by injecting semantic concepts into image tagging training framework. Previous approaches are either image tagging models constrained by limited semantics, or vision-language models with shallow interaction for suboptimal performance in multi-tag recognition. In contrast, RAM++ integrates image-text alignment and image-tagging within a unified fine-grained interaction framework based on image-tags-text triplets. This design enables RAM++ not only excel in identifying predefined categories, but also significantly augment the recognition ability in open-set categories. Moreover, RAM++ employs large language models (LLMs) to generate diverse visual tag descriptions, pioneering the integration of LLM's knowledge into image tagging training. This approach empowers RAM++ to integrate visual description concepts for open-set recognition during inference. Evaluations on comprehensive image recognition benchmarks demonstrate RAM++ exceeds existing state-of-the-art (SOTA) fundamental image recognition models on most aspects.*
## BibTex and citation info
```
@article{zhang2023recognize,
title={Recognize Anything: A Strong Image Tagging Model},
author={Zhang, Youcai and Huang, Xinyu and Ma, Jinyu and Li, Zhaoyang and Luo, Zhaochuan and Xie, Yanchun and Qin, Yuzhuo and Luo, Tong and Li, Yaqian and Liu, Shilong and others},
journal={arXiv preprint arXiv:2306.03514},
year={2023}
}
@article{huang2023tag2text,
title={Tag2Text: Guiding Vision-Language Model via Image Tagging},
author={Huang, Xinyu and Zhang, Youcai and Ma, Jinyu and Tian, Weiwei and Feng, Rui and Zhang, Yuejie and Li, Yaqian and Guo, Yandong and Zhang, Lei},
journal={arXiv preprint arXiv:2303.05657},
year={2023}
}
```
|
blur0b0t/mh_one_api
|
blur0b0t
| 2023-10-25T08:54:29Z | 0 | 1 |
peft
|
[
"peft",
"question-answering",
"region:us"
] |
question-answering
| 2023-10-15T10:26:22Z |
---
library_name: peft
pipeline_tag: question-answering
---
## Model description
Flan-T5-XL model fine-tuned on the dataset provided by Intel|MachineHack for QnA.
## Intended uses & limitations
More information needed
### Training hyperparameters
The following hyperparameters were used during training:
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 1 \
--do_train \
--learning_rate 1.0e-5 \
--warmup_ratio 0.03 \
--weight_decay 0.0 \
--num_train_epochs 1 \
### Training results
{
"epoch": 1.08,
"train_loss": 0.0,
"train_runtime": 0.017,
"train_samples": 66611,
"train_samples_per_second": 3929048.542,
"train_steps_per_second": 1964553.764
}
### Framework versions
- PEFT 0.5.0
|
khurshedz/Ghg
|
khurshedz
| 2023-10-25T08:50:49Z | 0 | 0 | null |
[
"art",
"image-to-image",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-10-25T08:46:34Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- bertscore
pipeline_tag: image-to-image
tags:
- art
---
|
jordanhilado/sd-1-5-sketch-lora
|
jordanhilado
| 2023-10-25T08:46:13Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-18T20:51:10Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jordanhilado/sd-1-5-sketch-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the zoheb/sketch-scene dataset. Some example images follow.




|
anandNakat/bart_math_solver
|
anandNakat
| 2023-10-25T08:43:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:anandNakat/bart_math_solver",
"base_model:finetune:anandNakat/bart_math_solver",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T06:11:47Z |
---
license: apache-2.0
base_model: anandNakat/bart_math_solver
tags:
- generated_from_trainer
model-index:
- name: bart_math_solver
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_math_solver
This model is a fine-tuned version of [anandNakat/bart_math_solver](https://huggingface.co/anandNakat/bart_math_solver) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 221 | 0.6829 |
| No log | 2.0 | 442 | 0.4829 |
| 0.2857 | 3.0 | 663 | 1.2298 |
| 0.2857 | 4.0 | 884 | 0.4878 |
| 0.2399 | 5.0 | 1105 | 1.1191 |
| 0.2399 | 6.0 | 1326 | 0.6093 |
| 0.1495 | 7.0 | 1547 | 0.5928 |
| 0.1495 | 8.0 | 1768 | 0.5647 |
| 0.1495 | 9.0 | 1989 | 0.6989 |
| 0.0935 | 10.0 | 2210 | 0.7365 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
fmurimi/bert-finetuned-ner
|
fmurimi
| 2023-10-25T08:29:25Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-25T08:17:29Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: fmurimi/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fmurimi/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0270
- Validation Loss: 0.0540
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1742 | 0.0651 | 0 |
| 0.0456 | 0.0583 | 1 |
| 0.0270 | 0.0540 | 2 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
kaitchup/Mistral-7B-v0.1-SFT-ultrachat
|
kaitchup
| 2023-10-25T08:19:33Z | 0 | 1 |
peft
|
[
"peft",
"dataset:kaitchup/ultrachat-100k-flattened",
"region:us"
] | null | 2023-10-25T08:18:18Z |
---
library_name: peft
datasets:
- kaitchup/ultrachat-100k-flattened
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
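For reference, the same settings correspond to a `transformers` `BitsAndBytesConfig` — a sketch of the config object only (reloading the base model and adapter is omitted):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute,
# matching the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```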
### Framework versions
- PEFT 0.5.0
|
botcon/squad_tuned_luke_peft_full
|
botcon
| 2023-10-25T08:14:50Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"luke",
"question-answering",
"generated_from_trainer",
"base_model:studio-ousia/luke-base",
"base_model:finetune:studio-ousia/luke-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-25T08:13:45Z |
---
license: apache-2.0
base_model: studio-ousia/luke-base
tags:
- generated_from_trainer
model-index:
- name: peft_tuned_luke_try2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft_tuned_luke_try2
This model is a fine-tuned version of [studio-ousia/luke-base](https://huggingface.co/studio-ousia/luke-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
lltala/ner-2-xlm-roberta-base
|
lltala
| 2023-10-25T07:52:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:lltala/e-ner-roberta-base",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-25T07:39:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- lltala/e-ner-roberta-base
model-index:
- name: ner-2-xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-2-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the lltala/e-ner-roberta-base dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0690
- Loc Precision: 0.6234
- Loc Recall: 0.6316
- Loc F1: 0.6275
- Loc Number: 76
- Org Precision: 0.8116
- Org Recall: 0.6744
- Org F1: 0.7366
- Org Number: 562
- Per Precision: 0.9737
- Per Recall: 1.0
- Per F1: 0.9867
- Per Number: 74
- Overall Precision: 0.8081
- Overall Recall: 0.7037
- Overall F1: 0.7523
- Overall Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
joseluhf11/disease_encoder_v2
|
joseluhf11
| 2023-10-25T07:42:59Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-10-25T07:32:16Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# joseluhf11/disease_encoder_v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joseluhf11/disease_encoder_v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('joseluhf11/disease_encoder_v2')
model = AutoModel.from_pretrained('joseluhf11/disease_encoder_v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joseluhf11/disease_encoder_v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 406 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 406,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
hyo37009/cppe5_use_data_finetuning
|
hyo37009
| 2023-10-25T07:33:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-24T11:51:22Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: cppe5_use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cppe5_use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
chojo12/use_data_finetuning
|
chojo12
| 2023-10-25T07:25:17Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-10-25T06:40:39Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
astrid01052/guanaco-3-noisy
|
astrid01052
| 2023-10-25T07:19:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-25T07:14:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
Reyden/my_awesome_asr_mind_model
|
Reyden
| 2023-10-25T07:17:20Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-25T06:32:57Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.1710
- Wer: 1.0
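The WER reported above is word error rate: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal, dependency-free sketch of the metric (not necessarily the implementation used during training, which typically comes from `evaluate` or `jiwer`):

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the cat sat"))  # -> 0.0
```

A WER of 1.0, as in the table below, means the model got essentially every reference word wrong at this training scale.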
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 58.3347 | 1.0 | 5 | 30.3906 | 1.2065 |
| 49.6382 | 2.0 | 10 | 30.3906 | 1.2065 |
| 47.2021 | 3.0 | 15 | 26.6334 | 1.0109 |
| 28.9906 | 4.0 | 20 | 22.2268 | 1.0 |
| 28.247 | 5.0 | 25 | 18.9242 | 1.0 |
| 27.9882 | 6.0 | 30 | 15.4867 | 1.0 |
| 22.125 | 7.0 | 35 | 12.5733 | 1.0 |
| 18.5329 | 8.0 | 40 | 10.6346 | 1.0 |
| 14.7859 | 9.0 | 45 | 9.1798 | 1.0 |
| 14.7127 | 10.0 | 50 | 8.1710 | 1.0 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
syabusyabu0141/tc_el_chains
|
syabusyabu0141
| 2023-10-25T07:13:43Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"electra",
"token-classification",
"generated_from_keras_callback",
"base_model:google/electra-base-discriminator",
"base_model:finetune:google/electra-base-discriminator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-25T03:28:41Z |
---
license: apache-2.0
base_model: google/electra-base-discriminator
tags:
- generated_from_keras_callback
model-index:
- name: syabusyabu0141/test7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syabusyabu0141/test7
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0448
- Validation Loss: 0.0252
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0448 | 0.0252 | 0 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Floyd93/grammar_correction
|
Floyd93
| 2023-10-25T07:13:20Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T07:12:40Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: sample_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sample_data
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9981
- Rouge1: 11.7326
- Rouge2: 5.4413
- Rougel: 10.8133
- Rougelsum: 11.3071
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7628 | 5.0 | 500 | 1.9981 | 11.7326 | 5.4413 | 10.8133 | 11.3071 | 19.0 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Yntec/GoldenEra
|
Yntec
| 2023-10-25T07:08:14Z | 1,436 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Retro",
"3D",
"Pixar",
"Elldreth",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-12T04:20:10Z |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Retro
- 3D
- Pixar
- Elldreth
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# Golden Era
A mix of Elldreth's Retro and DucHaiten-GoldenLife, two of my favorite models!
Comparison:

(click for larger)
Sample and prompt:

Female mini cute style, sitting IN SOFA in gaming room, A wholesome animation key shot at computer monitor, pixar and disney animation, studio ghibli, anime key art by ROSSDRAWS and Clay Mann, style of maple story, maple story girl, soft lighting, soft shade, chibi
Original Pages:
https://tensor.art/models/628276277415133426 (GoldenLife)
https://huggingface.co/Yntec/ElldrethsRetroMix_Diffusers
# GoldenLife-Retro
A mix of DucHaiten-GoldenLife and Elldreth's Retro, two of my favorite models!

# Recipes (SuperMerger Train Difference)
- Add Difference
Model A:
DucHaitenGoldenLife
Model B:
DucHaitenGoldenLife
Model C:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output:
GoldenLifeEssense
- Weight Sum 0.70
Model A:
GoldenLifeEssense
Model B:
ElldrethsRetroMix
Output:
RetroLifeAlpha
- Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0
Model A:
RetroLifeAlpha
Model B:
ElldrethsRetroMix
Output:
Retro-GoldenLife
- Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0
Model A:
Retro-GoldenLife
Model B:
DucHaiten-GoldenLife
Output:
GoldenEra
- Fp16-no-ema
Output:
GoldenEra-mini
- Add Difference
Model A:
ElldrethsRetroMix
Model B:
ElldrethsRetroMix
Model C:
v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors)
Output:
RetroEssense
- Weight Sum 0.70
Model A:
RetroEssense
Model B:
DucHaiten-GoldenLife
Output:
GoldenRetroAlpha
- Weight Sum MBW - 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1
Model A:
GoldenRetroAlpha
Model B:
DucHaiten-GoldenLife
Output:
GoldenRetroOmega
- Weight Sum MBW - 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0
Model A:
GoldenRetroOmega
Model B:
ElldrethsRetroMix
Output:
GoldenLife-Retro
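The "Add Difference" and "Weight Sum" steps above are element-wise operations on checkpoint weights. A simplified sketch of both on plain dicts of tensors (SuperMerger's MBW variant additionally applies a different alpha per U-Net block; names here are illustrative):

```python
# SuperMerger "Add Difference": A + (B - C), per shared tensor.
def add_difference(a, b, c):
    return {k: a[k] + (b[k] - c[k]) for k in a}

# SuperMerger "Weight Sum": linear interpolation (1 - alpha) * A + alpha * B,
# where alpha is the weight given to model B.
def weight_sum(a, b, alpha=0.70):
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

merged = weight_sum({"w": 1.0}, {"w": 3.0}, alpha=0.70)
print(merged["w"])  # ≈ 2.4
```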
|
syafiqfaray/byt5-small-indocollex-informal-to-formal-wordformation
|
syafiqfaray
| 2023-10-25T06:51:53Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-small",
"base_model:finetune:google/byt5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T04:27:56Z |
---
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: byt5-finetuned-indocollex-informal-to-formal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ByT5 Finetuned IndoCollex Informal to Formal with Word Formation Tag
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the [IndoCollex dataset](https://github.com/haryoa/indo-collex) for informal-to-formal transformation.
It achieves the following results on the evaluation set:
- Loss: 0.1665
- Cer: 0.1952
- Wer: 0.481
- Word Acc: 0.519
- Gen Len: 7.6914
On the test set, it achieves the following results:
- CER: 0.2152
- WER: 0.5125
- Word Accuracy: 0.4875
## Model description
Inputs are constructed like this: `tag transformasi kata: %s. kata: %s`, where the first `%s` is the word-formation tag and the second is the informal word.
For example: `tag transformasi kata: sound-alter. kata: sampe`
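The input format can be built with a small helper — a sketch of the prompt construction only (the resulting string would be tokenized and passed to the fine-tuned ByT5 model; the tag names follow the IndoCollex word-formation labels):

```python
# Build a model input in the format described above.
def build_input(transformation_tag: str, informal_word: str) -> str:
    return f"tag transformasi kata: {transformation_tag}. kata: {informal_word}"

print(build_input("sound-alter", "sampe"))
# -> tag transformasi kata: sound-alter. kata: sampe
```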
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer | Word Acc | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-------:|
| No log | 1.0 | 93 | 33.2385 | 2.2445 | 2.4 | -1.4 | 19.0 |
| No log | 2.0 | 186 | 16.9556 | 2.3667 | 1.081 | -0.081 | 19.0 |
| No log | 3.0 | 279 | 5.1125 | 1.3005 | 1.0 | 0.0 | 6.1886 |
| No log | 4.0 | 372 | 3.0517 | 0.8676 | 0.9857 | 0.0143 | 8.5029 |
| No log | 5.0 | 465 | 1.8607 | 0.4058 | 0.981 | 0.019 | 6.5486 |
| 17.3258 | 6.0 | 558 | 0.7701 | 0.3769 | 0.9762 | 0.0238 | 6.3486 |
| 17.3258 | 7.0 | 651 | 0.4911 | 0.3328 | 0.9619 | 0.0381 | 6.48 |
| 17.3258 | 8.0 | 744 | 0.4172 | 0.3183 | 0.9476 | 0.0524 | 6.6971 |
| 17.3258 | 9.0 | 837 | 0.3590 | 0.3014 | 0.9095 | 0.0905 | 6.8114 |
| 17.3258 | 10.0 | 930 | 0.3303 | 0.3039 | 0.8762 | 0.1238 | 7.2686 |
| 0.696 | 11.0 | 1023 | 0.3030 | 0.2912 | 0.8286 | 0.1714 | 7.2971 |
| 0.696 | 12.0 | 1116 | 0.2969 | 0.3048 | 0.8429 | 0.1571 | 7.4514 |
| 0.696 | 13.0 | 1209 | 0.2799 | 0.298 | 0.8238 | 0.1762 | 7.4286 |
| 0.696 | 14.0 | 1302 | 0.2656 | 0.2946 | 0.8 | 0.2 | 7.4743 |
| 0.696 | 15.0 | 1395 | 0.2524 | 0.2555 | 0.7619 | 0.2381 | 7.2457 |
| 0.696 | 16.0 | 1488 | 0.2427 | 0.2564 | 0.7286 | 0.2714 | 7.4 |
| 0.3225 | 17.0 | 1581 | 0.2317 | 0.2309 | 0.7095 | 0.2905 | 7.2343 |
| 0.3225 | 18.0 | 1674 | 0.2196 | 0.2258 | 0.6857 | 0.3143 | 7.2971 |
| 0.3225 | 19.0 | 1767 | 0.2162 | 0.2334 | 0.7095 | 0.2905 | 7.24 |
| 0.3225 | 20.0 | 1860 | 0.2094 | 0.2224 | 0.7 | 0.3 | 7.2571 |
| 0.3225 | 21.0 | 1953 | 0.2050 | 0.219 | 0.6714 | 0.3286 | 7.28 |
| 0.2482 | 22.0 | 2046 | 0.2006 | 0.2148 | 0.6571 | 0.3429 | 7.3314 |
| 0.2482 | 23.0 | 2139 | 0.1985 | 0.225 | 0.6619 | 0.3381 | 7.3543 |
| 0.2482 | 24.0 | 2232 | 0.1962 | 0.2156 | 0.6429 | 0.3571 | 7.4114 |
| 0.2482 | 25.0 | 2325 | 0.1927 | 0.2173 | 0.6381 | 0.3619 | 7.3429 |
| 0.2482 | 26.0 | 2418 | 0.1943 | 0.2199 | 0.6524 | 0.3476 | 7.3943 |
| 0.2055 | 27.0 | 2511 | 0.1940 | 0.2122 | 0.6381 | 0.3619 | 7.2571 |
| 0.2055 | 28.0 | 2604 | 0.1869 | 0.2046 | 0.6143 | 0.3857 | 7.3314 |
| 0.2055 | 29.0 | 2697 | 0.1849 | 0.1995 | 0.6 | 0.4 | 7.3543 |
| 0.2055 | 30.0 | 2790 | 0.1833 | 0.2114 | 0.6048 | 0.3952 | 7.3543 |
| 0.2055 | 31.0 | 2883 | 0.1812 | 0.2054 | 0.5952 | 0.4048 | 7.4457 |
| 0.2055 | 32.0 | 2976 | 0.1772 | 0.208 | 0.5905 | 0.4095 | 7.52 |
| 0.1792 | 33.0 | 3069 | 0.1768 | 0.2046 | 0.5905 | 0.4095 | 7.4743 |
| 0.1792 | 34.0 | 3162 | 0.1756 | 0.2114 | 0.581 | 0.419 | 7.4857 |
| 0.1792 | 35.0 | 3255 | 0.1735 | 0.2165 | 0.5714 | 0.4286 | 7.52 |
| 0.1792 | 36.0 | 3348 | 0.1713 | 0.2165 | 0.5714 | 0.4286 | 7.6114 |
| 0.1792 | 37.0 | 3441 | 0.1726 | 0.2037 | 0.5619 | 0.4381 | 7.4914 |
| 0.1591 | 38.0 | 3534 | 0.1663 | 0.2063 | 0.5619 | 0.4381 | 7.4629 |
| 0.1591 | 39.0 | 3627 | 0.1664 | 0.1995 | 0.5524 | 0.4476 | 7.44 |
| 0.1591 | 40.0 | 3720 | 0.1661 | 0.1986 | 0.5381 | 0.4619 | 7.4457 |
| 0.1591 | 41.0 | 3813 | 0.1658 | 0.1995 | 0.5333 | 0.4667 | 7.5429 |
| 0.1591 | 42.0 | 3906 | 0.1646 | 0.191 | 0.519 | 0.481 | 7.48 |
| 0.1591 | 43.0 | 3999 | 0.1619 | 0.1995 | 0.5381 | 0.4619 | 7.5543 |
| 0.1427 | 44.0 | 4092 | 0.1641 | 0.1969 | 0.5333 | 0.4667 | 7.4229 |
| 0.1427 | 45.0 | 4185 | 0.1672 | 0.1944 | 0.5286 | 0.4714 | 7.4743 |
| 0.1427 | 46.0 | 4278 | 0.1645 | 0.1952 | 0.5381 | 0.4619 | 7.5143 |
| 0.1427 | 47.0 | 4371 | 0.1667 | 0.1952 | 0.5381 | 0.4619 | 7.4686 |
| 0.1427 | 48.0 | 4464 | 0.1663 | 0.1961 | 0.5143 | 0.4857 | 7.5543 |
| 0.1322 | 49.0 | 4557 | 0.1640 | 0.1986 | 0.5333 | 0.4667 | 7.44 |
| 0.1322 | 50.0 | 4650 | 0.1646 | 0.1935 | 0.4905 | 0.5095 | 7.4857 |
| 0.1322 | 51.0 | 4743 | 0.1644 | 0.1927 | 0.5143 | 0.4857 | 7.4971 |
| 0.1322 | 52.0 | 4836 | 0.1637 | 0.2148 | 0.5381 | 0.4619 | 7.5257 |
| 0.1322 | 53.0 | 4929 | 0.1668 | 0.1978 | 0.5 | 0.5 | 7.5371 |
| 0.1227 | 54.0 | 5022 | 0.1650 | 0.1995 | 0.519 | 0.481 | 7.5257 |
| 0.1227 | 55.0 | 5115 | 0.1661 | 0.1952 | 0.4952 | 0.5048 | 7.6 |
| 0.1227 | 56.0 | 5208 | 0.1642 | 0.2012 | 0.5095 | 0.4905 | 7.6057 |
| 0.1227 | 57.0 | 5301 | 0.1667 | 0.2037 | 0.5048 | 0.4952 | 7.64 |
| 0.1227 | 58.0 | 5394 | 0.1650 | 0.1893 | 0.4857 | 0.5143 | 7.52 |
| 0.1227 | 59.0 | 5487 | 0.1665 | 0.1944 | 0.481 | 0.519 | 7.5657 |
| 0.1165 | 60.0 | 5580 | 0.1652 | 0.1902 | 0.4905 | 0.5095 | 7.5429 |
| 0.1165 | 61.0 | 5673 | 0.1649 | 0.1885 | 0.4857 | 0.5143 | 7.5543 |
| 0.1165 | 62.0 | 5766 | 0.1679 | 0.1893 | 0.4905 | 0.5095 | 7.5371 |
| 0.1165 | 63.0 | 5859 | 0.1670 | 0.1935 | 0.4905 | 0.5095 | 7.56 |
| 0.1165 | 64.0 | 5952 | 0.1667 | 0.1944 | 0.4905 | 0.5095 | 7.5714 |
| 0.1074 | 65.0 | 6045 | 0.1676 | 0.1978 | 0.4952 | 0.5048 | 7.5886 |
| 0.1074 | 66.0 | 6138 | 0.1653 | 0.2012 | 0.481 | 0.519 | 7.5771 |
| 0.1074 | 67.0 | 6231 | 0.1667 | 0.1961 | 0.4857 | 0.5143 | 7.5943 |
| 0.1074 | 68.0 | 6324 | 0.1666 | 0.1927 | 0.4762 | 0.5238 | 7.5886 |
| 0.1074 | 69.0 | 6417 | 0.1671 | 0.2003 | 0.4952 | 0.5048 | 7.52 |
| 0.1038 | 70.0 | 6510 | 0.1648 | 0.2046 | 0.4857 | 0.5143 | 7.6 |
| 0.1038 | 71.0 | 6603 | 0.1653 | 0.1935 | 0.481 | 0.519 | 7.6514 |
| 0.1038 | 72.0 | 6696 | 0.1663 | 0.1952 | 0.4762 | 0.5238 | 7.6171 |
| 0.1038 | 73.0 | 6789 | 0.1655 | 0.1995 | 0.481 | 0.519 | 7.6971 |
| 0.1038 | 74.0 | 6882 | 0.1653 | 0.1969 | 0.4762 | 0.5238 | 7.6857 |
| 0.1038 | 75.0 | 6975 | 0.1661 | 0.1995 | 0.4762 | 0.5238 | 7.7143 |
| 0.1004 | 76.0 | 7068 | 0.1649 | 0.2003 | 0.4762 | 0.5238 | 7.7143 |
| 0.1004 | 77.0 | 7161 | 0.1657 | 0.1969 | 0.4762 | 0.5238 | 7.6971 |
| 0.1004 | 78.0 | 7254 | 0.1652 | 0.1986 | 0.481 | 0.519 | 7.7029 |
| 0.1004 | 79.0 | 7347 | 0.1669 | 0.1969 | 0.481 | 0.519 | 7.68 |
| 0.1004 | 80.0 | 7440 | 0.1665 | 0.2003 | 0.4762 | 0.5238 | 7.68 |
| 0.0966 | 81.0 | 7533 | 0.1656 | 0.2012 | 0.481 | 0.519 | 7.7143 |
| 0.0966 | 82.0 | 7626 | 0.1660 | 0.1995 | 0.481 | 0.519 | 7.7143 |
| 0.0966 | 83.0 | 7719 | 0.1639 | 0.1978 | 0.4762 | 0.5238 | 7.7029 |
| 0.0966 | 84.0 | 7812 | 0.1654 | 0.1986 | 0.481 | 0.519 | 7.7086 |
| 0.0966 | 85.0 | 7905 | 0.1661 | 0.1995 | 0.481 | 0.519 | 7.7143 |
| 0.0966 | 86.0 | 7998 | 0.1662 | 0.1986 | 0.481 | 0.519 | 7.7143 |
| 0.0958 | 87.0 | 8091 | 0.1660 | 0.1969 | 0.4762 | 0.5238 | 7.7143 |
| 0.0958 | 88.0 | 8184 | 0.1659 | 0.1944 | 0.481 | 0.519 | 7.6914 |
| 0.0958 | 89.0 | 8277 | 0.1656 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0958 | 90.0 | 8370 | 0.1658 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0958 | 91.0 | 8463 | 0.1661 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0944 | 92.0 | 8556 | 0.1661 | 0.1961 | 0.481 | 0.519 | 7.6971 |
| 0.0944 | 93.0 | 8649 | 0.1662 | 0.1944 | 0.481 | 0.519 | 7.6914 |
| 0.0944 | 94.0 | 8742 | 0.1657 | 0.1961 | 0.481 | 0.519 | 7.7029 |
| 0.0944 | 95.0 | 8835 | 0.1663 | 0.1944 | 0.481 | 0.519 | 7.6914 |
| 0.0944 | 96.0 | 8928 | 0.1664 | 0.1944 | 0.481 | 0.519 | 7.6914 |
| 0.0923 | 97.0 | 9021 | 0.1663 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0923 | 98.0 | 9114 | 0.1666 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0923 | 99.0 | 9207 | 0.1664 | 0.1952 | 0.481 | 0.519 | 7.6914 |
| 0.0923 | 100.0 | 9300 | 0.1665 | 0.1952 | 0.481 | 0.519 | 7.6914 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
syafiqfaray/byt5-base-indocollex-informal-to-formal
|
syafiqfaray
| 2023-10-25T06:48:08Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/byt5-base",
"base_model:finetune:google/byt5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-25T04:11:02Z |
---
license: apache-2.0
base_model: google/byt5-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: byt5-finetuned-indocollex-informal-to-formal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Byt5-base Finetuned IndoCollex Informal to Formal
This model is a fine-tuned version of [google/byt5-base](https://huggingface.co/google/byt5-base) on the [IndoCollex dataset](https://github.com/haryoa/indo-collex) for informal-to-formal transformation.
It achieves the following results on the evaluation set:
- Loss: 0.2191
- Cer: 0.208
- Wer: 0.5286
- Word Acc: 0.4714
- Gen Len: 7.7486
On the test set, it achieves the following results:
- CER: 0.2589
- WER: 0.575
- Word Accuracy: 0.425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer | Word Acc | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-------:|
| No log | 1.0 | 93 | 15.6714 | 2.1749 | 2.0857 | -1.0857 | 19.0 |
| No log | 2.0 | 186 | 12.9426 | 2.1265 | 1.6619 | -0.6619 | 19.0 |
| No log | 3.0 | 279 | 9.8664 | 1.9525 | 1.0524 | -0.0524 | 17.8343 |
| No log | 4.0 | 372 | 4.9424 | 1.0051 | 1.0 | 0.0 | 0.1086 |
| No log | 5.0 | 465 | 3.8691 | 0.5688 | 0.9667 | 0.0333 | 6.8171 |
| 11.8639 | 6.0 | 558 | 3.0433 | 0.5161 | 0.9619 | 0.0381 | 6.1486 |
| 11.8639 | 7.0 | 651 | 1.1874 | 0.5374 | 0.9619 | 0.0381 | 5.9143 |
| 11.8639 | 8.0 | 744 | 0.5482 | 0.5756 | 0.9714 | 0.0286 | 7.2457 |
| 11.8639 | 9.0 | 837 | 0.4749 | 0.5195 | 0.9476 | 0.0524 | 7.1771 |
| 11.8639 | 10.0 | 930 | 0.3678 | 0.3591 | 0.8952 | 0.1048 | 7.0286 |
| 1.4703 | 11.0 | 1023 | 0.3154 | 0.2988 | 0.8524 | 0.1476 | 7.0343 |
| 1.4703 | 12.0 | 1116 | 0.2753 | 0.2895 | 0.819 | 0.181 | 7.3314 |
| 1.4703 | 13.0 | 1209 | 0.2561 | 0.2674 | 0.7667 | 0.2333 | 7.1543 |
| 1.4703 | 14.0 | 1302 | 0.2386 | 0.2581 | 0.7667 | 0.2333 | 7.3657 |
| 1.4703 | 15.0 | 1395 | 0.2249 | 0.2453 | 0.7429 | 0.2571 | 7.4629 |
| 1.4703 | 16.0 | 1488 | 0.2163 | 0.2428 | 0.7286 | 0.2714 | 7.4514 |
| 0.3131 | 17.0 | 1581 | 0.2093 | 0.2538 | 0.7286 | 0.2714 | 7.4743 |
| 0.3131 | 18.0 | 1674 | 0.2027 | 0.2436 | 0.7143 | 0.2857 | 7.6057 |
| 0.3131 | 19.0 | 1767 | 0.1986 | 0.2453 | 0.7 | 0.3 | 7.6229 |
| 0.3131 | 20.0 | 1860 | 0.1923 | 0.2453 | 0.6952 | 0.3048 | 7.6571 |
| 0.3131 | 21.0 | 1953 | 0.1907 | 0.2411 | 0.6667 | 0.3333 | 7.5829 |
| 0.2093 | 22.0 | 2046 | 0.1876 | 0.2368 | 0.6571 | 0.3429 | 7.6171 |
| 0.2093 | 23.0 | 2139 | 0.1889 | 0.2351 | 0.6476 | 0.3524 | 7.5543 |
| 0.2093 | 24.0 | 2232 | 0.1864 | 0.2334 | 0.6429 | 0.3571 | 7.6 |
| 0.2093 | 25.0 | 2325 | 0.1802 | 0.2284 | 0.6238 | 0.3762 | 7.6 |
| 0.2093 | 26.0 | 2418 | 0.1786 | 0.2292 | 0.6381 | 0.3619 | 7.6 |
| 0.1641 | 27.0 | 2511 | 0.1760 | 0.2182 | 0.6286 | 0.3714 | 7.56 |
| 0.1641 | 28.0 | 2604 | 0.1769 | 0.2199 | 0.6143 | 0.3857 | 7.6057 |
| 0.1641 | 29.0 | 2697 | 0.1735 | 0.225 | 0.619 | 0.381 | 7.6114 |
| 0.1641 | 30.0 | 2790 | 0.1764 | 0.2207 | 0.6 | 0.4 | 7.6057 |
| 0.1641 | 31.0 | 2883 | 0.1727 | 0.208 | 0.5952 | 0.4048 | 7.5943 |
| 0.1641 | 32.0 | 2976 | 0.1735 | 0.208 | 0.5905 | 0.4095 | 7.64 |
| 0.1336 | 33.0 | 3069 | 0.1694 | 0.2063 | 0.581 | 0.419 | 7.6743 |
| 0.1336 | 34.0 | 3162 | 0.1728 | 0.2114 | 0.5857 | 0.4143 | 7.6914 |
| 0.1336 | 35.0 | 3255 | 0.1722 | 0.2207 | 0.5857 | 0.4143 | 7.64 |
| 0.1336 | 36.0 | 3348 | 0.1703 | 0.2224 | 0.6 | 0.4 | 7.6857 |
| 0.1336 | 37.0 | 3441 | 0.1715 | 0.2173 | 0.581 | 0.419 | 7.6629 |
| 0.1115 | 38.0 | 3534 | 0.1735 | 0.2148 | 0.5762 | 0.4238 | 7.68 |
| 0.1115 | 39.0 | 3627 | 0.1715 | 0.2088 | 0.5762 | 0.4238 | 7.7143 |
| 0.1115 | 40.0 | 3720 | 0.1754 | 0.2131 | 0.5714 | 0.4286 | 7.72 |
| 0.1115 | 41.0 | 3813 | 0.1757 | 0.2122 | 0.5524 | 0.4476 | 7.76 |
| 0.1115 | 42.0 | 3906 | 0.1725 | 0.2122 | 0.5571 | 0.4429 | 7.7714 |
| 0.1115 | 43.0 | 3999 | 0.1724 | 0.2173 | 0.5619 | 0.4381 | 7.76 |
| 0.0954 | 44.0 | 4092 | 0.1799 | 0.2071 | 0.5429 | 0.4571 | 7.7371 |
| 0.0954 | 45.0 | 4185 | 0.1771 | 0.2097 | 0.5524 | 0.4476 | 7.7657 |
| 0.0954 | 46.0 | 4278 | 0.1780 | 0.2063 | 0.5476 | 0.4524 | 7.72 |
| 0.0954 | 47.0 | 4371 | 0.1791 | 0.2088 | 0.5381 | 0.4619 | 7.7714 |
| 0.0954 | 48.0 | 4464 | 0.1799 | 0.2105 | 0.5429 | 0.4571 | 7.7486 |
| 0.0814 | 49.0 | 4557 | 0.1799 | 0.2054 | 0.5333 | 0.4667 | 7.7657 |
| 0.0814 | 50.0 | 4650 | 0.1830 | 0.2037 | 0.5381 | 0.4619 | 7.6971 |
| 0.0814 | 51.0 | 4743 | 0.1824 | 0.2088 | 0.5429 | 0.4571 | 7.76 |
| 0.0814 | 52.0 | 4836 | 0.1846 | 0.2037 | 0.5286 | 0.4714 | 7.7771 |
| 0.0814 | 53.0 | 4929 | 0.1837 | 0.2046 | 0.5286 | 0.4714 | 7.7429 |
| 0.073 | 54.0 | 5022 | 0.1816 | 0.2054 | 0.5333 | 0.4667 | 7.7943 |
| 0.073 | 55.0 | 5115 | 0.1825 | 0.2029 | 0.519 | 0.481 | 7.7771 |
| 0.073 | 56.0 | 5208 | 0.1870 | 0.208 | 0.5286 | 0.4714 | 7.7829 |
| 0.073 | 57.0 | 5301 | 0.1870 | 0.2105 | 0.5381 | 0.4619 | 7.7829 |
| 0.073 | 58.0 | 5394 | 0.1932 | 0.2054 | 0.5286 | 0.4714 | 7.7543 |
| 0.073 | 59.0 | 5487 | 0.1880 | 0.2046 | 0.5143 | 0.4857 | 7.7886 |
| 0.0653 | 60.0 | 5580 | 0.1890 | 0.2071 | 0.519 | 0.481 | 7.7714 |
| 0.0653 | 61.0 | 5673 | 0.1952 | 0.2105 | 0.5286 | 0.4714 | 7.7886 |
| 0.0653 | 62.0 | 5766 | 0.1940 | 0.2054 | 0.5238 | 0.4762 | 7.8 |
| 0.0653 | 63.0 | 5859 | 0.1948 | 0.2063 | 0.5143 | 0.4857 | 7.7829 |
| 0.0653 | 64.0 | 5952 | 0.1972 | 0.208 | 0.5238 | 0.4762 | 7.7943 |
| 0.0582 | 65.0 | 6045 | 0.1965 | 0.2046 | 0.5238 | 0.4762 | 7.7543 |
| 0.0582 | 66.0 | 6138 | 0.1968 | 0.2046 | 0.5095 | 0.4905 | 7.7657 |
| 0.0582 | 67.0 | 6231 | 0.1981 | 0.2071 | 0.519 | 0.481 | 7.7886 |
| 0.0582 | 68.0 | 6324 | 0.1977 | 0.2063 | 0.519 | 0.481 | 7.7771 |
| 0.0582 | 69.0 | 6417 | 0.2018 | 0.2054 | 0.5238 | 0.4762 | 7.7657 |
| 0.0522 | 70.0 | 6510 | 0.1992 | 0.2088 | 0.5286 | 0.4714 | 7.7657 |
| 0.0522 | 71.0 | 6603 | 0.1999 | 0.2131 | 0.5381 | 0.4619 | 7.7714 |
| 0.0522 | 72.0 | 6696 | 0.1998 | 0.2173 | 0.5429 | 0.4571 | 7.7943 |
| 0.0522 | 73.0 | 6789 | 0.1991 | 0.2156 | 0.5381 | 0.4619 | 7.7829 |
| 0.0522 | 74.0 | 6882 | 0.2024 | 0.2088 | 0.5333 | 0.4667 | 7.72 |
| 0.0522 | 75.0 | 6975 | 0.2053 | 0.2046 | 0.5286 | 0.4714 | 7.7257 |
| 0.0494 | 76.0 | 7068 | 0.2055 | 0.2054 | 0.5333 | 0.4667 | 7.7429 |
| 0.0494 | 77.0 | 7161 | 0.2064 | 0.208 | 0.5333 | 0.4667 | 7.7029 |
| 0.0494 | 78.0 | 7254 | 0.2057 | 0.208 | 0.5286 | 0.4714 | 7.7257 |
| 0.0494 | 79.0 | 7347 | 0.2087 | 0.2097 | 0.5381 | 0.4619 | 7.68 |
| 0.0494 | 80.0 | 7440 | 0.2085 | 0.2131 | 0.5476 | 0.4524 | 7.6971 |
| 0.0462 | 81.0 | 7533 | 0.2099 | 0.2122 | 0.5476 | 0.4524 | 7.6914 |
| 0.0462 | 82.0 | 7626 | 0.2090 | 0.2071 | 0.5286 | 0.4714 | 7.7429 |
| 0.0462 | 83.0 | 7719 | 0.2127 | 0.2088 | 0.5286 | 0.4714 | 7.7086 |
| 0.0462 | 84.0 | 7812 | 0.2135 | 0.2012 | 0.519 | 0.481 | 7.7371 |
| 0.0462 | 85.0 | 7905 | 0.2148 | 0.2029 | 0.519 | 0.481 | 7.7486 |
| 0.0462 | 86.0 | 7998 | 0.2148 | 0.2046 | 0.5238 | 0.4762 | 7.7657 |
| 0.0434 | 87.0 | 8091 | 0.2148 | 0.2029 | 0.519 | 0.481 | 7.7543 |
| 0.0434 | 88.0 | 8184 | 0.2150 | 0.2037 | 0.519 | 0.481 | 7.7657 |
| 0.0434 | 89.0 | 8277 | 0.2160 | 0.2063 | 0.5238 | 0.4762 | 7.7543 |
| 0.0434 | 90.0 | 8370 | 0.2167 | 0.2054 | 0.5238 | 0.4762 | 7.7486 |
| 0.0434 | 91.0 | 8463 | 0.2168 | 0.2037 | 0.519 | 0.481 | 7.7657 |
| 0.0419 | 92.0 | 8556 | 0.2172 | 0.2037 | 0.5238 | 0.4762 | 7.7543 |
| 0.0419 | 93.0 | 8649 | 0.2183 | 0.2037 | 0.5238 | 0.4762 | 7.7486 |
| 0.0419 | 94.0 | 8742 | 0.2190 | 0.2063 | 0.5286 | 0.4714 | 7.7371 |
| 0.0419 | 95.0 | 8835 | 0.2185 | 0.2054 | 0.5238 | 0.4762 | 7.7543 |
| 0.0419 | 96.0 | 8928 | 0.2184 | 0.2054 | 0.5238 | 0.4762 | 7.7543 |
| 0.0402 | 97.0 | 9021 | 0.2190 | 0.208 | 0.5286 | 0.4714 | 7.7429 |
| 0.0402 | 98.0 | 9114 | 0.2189 | 0.208 | 0.5286 | 0.4714 | 7.7486 |
| 0.0402 | 99.0 | 9207 | 0.2190 | 0.208 | 0.5286 | 0.4714 | 7.7486 |
| 0.0402 | 100.0 | 9300 | 0.2191 | 0.208 | 0.5286 | 0.4714 | 7.7486 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Reyden/main
|
Reyden
| 2023-10-25T06:08:44Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-11T20:54:43Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: my_awesome_asr_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 39.1627
- Wer: 1.0242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 43.2694 | 1.0 | 5 | 39.1627 | 1.0242 |
| 51.2427 | 2.0 | 10 | 39.1627 | 1.0242 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
ZixinSONG/sd-class-butterflies-32
|
ZixinSONG
| 2023-10-25T06:02:37Z | 46 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-10-25T06:01:39Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ZixinSONG/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Kit-Lemonfoot/kitlemonfoot_so-vits-svc-4.0_models
|
Kit-Lemonfoot
| 2023-10-25T05:57:00Z | 0 | 0 | null |
[
"music",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-25T05:52:14Z |
---
license: creativeml-openrail-m
tags:
- music
---
# Kit Lemonfoot's So-Vits-SVC Models
This repository exists to host So-Vits-SVC models made by Kit Lemonfoot, also known as Noel Shirogane's High Flying Birds on YouTube.
Please credit me if you use any models in this repository in any way.
(Creator note: Most of these models were cut off in development to focus more effort on developing the RVC scene. I'm releasing them now since I don't really have that much of a use for them anymore.)
## Currently Available Models:
- Hoshimachi Suisei [Hololive JP] (Singing)(Glitchy)
- Airani Iofifteen [Hololive ID] (Speaking)
- Pavolia Reine [Hololive ID] (Speaking)
- Nanashi Mumei [Hololive EN] (Speaking)
|
nobodynosql/sql_codellama
|
nobodynosql
| 2023-10-25T05:56:01Z | 13 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"transformer",
"en",
"fr",
"cn",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-19T14:13:45Z |
---
tasks:
- text2sql
model-type:
- glm
domain:
- nlp
frameworks:
- pytorch
backbone:
- transformer
containers:
- registry-vpc.cn-shanghai.aliyuncs.com/cloud-dsw/pytorch:1.8-cpu-py36-ubuntu18.04
- registry-vpc.cn-shanghai.aliyuncs.com/cloud-dsw/tensorflow:1.12-cpu-py36-ubuntu18.04
customized-quickstart: False
finetune-support: False
license: apache-2.0
language:
- en
- fr
- cn
tags:
- transformer
pre-train: False
train:
- spider dataset
- bird dataset
---
# Introduction to sql_codellama
SQL-Codellama is a model for text2SQL.
## Model backbone
It is built on the codellama model and trained using QLoRA.
## Training data
The training data includes the spider, starcode, and other datasets. The goal of this model is to convert natural-language queries into SQL queries.
## Features
Text to SQL (hereafter Text2SQL) is the task of converting natural-language text into the structured query language SQL. It is a subtask of semantic parsing in natural language processing.
Its purpose can be summarized simply as "breaking down the barrier between people and structured data": ordinary users can query complex databases and obtain the results they want through natural-language descriptions.
It understands a user's question by learning syntax, semantics, and query intent, and generates the corresponding SQL query from the structure of the target database. SQL-Codellama's training involved extensive data preprocessing, feature extraction, and model training to improve its accuracy and performance.
It can be applied in many areas, such as data analysis and database query optimization. SQL-Codellama is designed and trained to handle complex queries and produce high-quality SQL results, aiming to give users accurate and efficient text-to-SQL conversion that makes database querying and data analysis easier.
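As an illustration of the task, a text-to-SQL prompt pairs a database schema with a natural-language question. The prompt template below is an assumption for illustration only, not this model's documented prompt format:

```python
def build_text2sql_prompt(question: str, schema: str) -> str:
    """Assemble an illustrative text-to-SQL prompt from a question and a
    database schema (template is an assumption, not the model's format)."""
    return (
        "### Database schema:\n"
        f"{schema.strip()}\n"
        "### Question:\n"
        f"{question.strip()}\n"
        "### SQL:\n"
    )

prompt = build_text2sql_prompt(
    "How many singers are there?",
    "CREATE TABLE singer (singer_id INT, name TEXT, country TEXT);",
)
print(prompt)
```

The model would then be asked to complete the prompt after `### SQL:`.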
```bash
git clone https://www.modelscope.cn/tomatoModelScope/sql_codellama.git
```
|
michael-kingston/ppo-LunarLander-v2
|
michael-kingston
| 2023-10-25T05:51:50Z | 8 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-25T02:01:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 81.08 +/- 57.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A sketch of loading the trained checkpoint from the Hub (the zip filename is an assumption based on the repo name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the repo name
checkpoint = load_from_hub("michael-kingston/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
clarissa-koh-chope/distilbert-base-uncased-finetuned-ner_1
|
clarissa-koh-chope
| 2023-10-25T05:45:43Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-25T03:25:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner_1
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Precision: 0.9774
- Recall: 0.9724
- F1: 0.9749
- Accuracy: 0.9909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0296 | 1.0 | 5237 | 0.0535 | 0.9554 | 0.9612 | 0.9583 | 0.9872 |
| 0.0152 | 2.0 | 10474 | 0.0507 | 0.9643 | 0.9643 | 0.9643 | 0.9880 |
| 0.0133 | 3.0 | 15711 | 0.0558 | 0.9622 | 0.9612 | 0.9617 | 0.9874 |
| 0.0081 | 4.0 | 20948 | 0.0709 | 0.9743 | 0.9663 | 0.9703 | 0.9890 |
| 0.008 | 5.0 | 26185 | 0.0401 | 0.9795 | 0.9755 | 0.9775 | 0.9919 |
| 0.0054 | 6.0 | 31422 | 0.0661 | 0.9764 | 0.9704 | 0.9734 | 0.9898 |
| 0.0013 | 7.0 | 36659 | 0.0492 | 0.9764 | 0.9724 | 0.9744 | 0.9921 |
| 0.0035 | 8.0 | 41896 | 0.0602 | 0.9755 | 0.9735 | 0.9745 | 0.9911 |
| 0.001 | 9.0 | 47133 | 0.0687 | 0.9774 | 0.9724 | 0.9749 | 0.9907 |
| 0.0015 | 10.0 | 52370 | 0.0663 | 0.9774 | 0.9724 | 0.9749 | 0.9909 |
| 0.0012 | 11.0 | 57607 | 0.0665 | 0.9774 | 0.9724 | 0.9749 | 0.9909 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tmcgirr/BERT-squad-chatbot-AAI
|
tmcgirr
| 2023-10-25T05:44:47Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-16T17:16:16Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Model trained using BERT ("bert-base-cased") for AAI NLP Course.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Model trained using BERT ("bert-base-cased") on the SQuAD Dataset.
- **Developed by:** [Trevor McGirr, Eyoha Mengistu, Reed Oken]
- **Model type:** [Transformer]
- **Language(s) (NLP):** [English]
- **License:** [Unknown]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/tmcgirr/SQUAD-BERT-chatbot-AAI
- **Demo:** https://squad-bert-chatbot-aai.streamlit.app/
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Basic QA for questions from the SQuAD Dataset
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Model may not perform well on current events or uncommon language.
## How to Get Started with the Model
Use the Inference API to interact with the model.
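A minimal sketch of querying the model with the standard SQuAD-style input format; the commented-out `pipeline` call shows the usual `transformers` invocation (it downloads the checkpoint when run, so it is left as a comment here):

```python
# A SQuAD-style request is a question paired with a context passage.
def build_qa_request(question: str, context: str) -> dict:
    return {"question": question.strip(), "context": context.strip()}

request = build_qa_request(
    "In what country is Normandy located?",
    "The Normans gave their name to Normandy, a region in France.",
)

# With the `transformers` library installed, inference would look like
# (repo id taken from this card):
#   from transformers import pipeline
#   qa = pipeline("question-answering", model="tmcgirr/BERT-squad-chatbot-AAI")
#   answer = qa(**request)
print(request)
```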
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://www.kaggle.com/datasets/stanfordu/stanford-question-answering-dataset/data
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
https://github.com/tmcgirr/SQUAD-BERT-chatbot-AAI/blob/master/chatbot_training/chatbot.ipynb
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Will be provided soon.
|
akter-sust/Reinforce-model-2-25-10-2023
|
akter-sust
| 2023-10-25T05:38:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-25T05:37:31Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-2-25-10-2023
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 5.80 +/- 4.75
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
hikarunogo70/Japanese_RVC_Collection
|
hikarunogo70
| 2023-10-25T05:24:19Z | 0 | 0 | null |
[
"ja",
"license:openrail",
"region:us"
] | null | 2023-10-25T05:06:45Z |
---
license: openrail
language:
- ja
---
## Note
The models in this collection are RVC V2 voice models, so please use them with RVC V2.
|
evrglori-cream/evrglori-cream
|
evrglori-cream
| 2023-10-25T05:16:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"EvrGlori Cream",
"en",
"license:bsd-2-clause",
"region:us"
] | null | 2023-10-25T05:15:24Z |
---
license: bsd-2-clause
language:
- en
library_name: diffusers
tags:
- EvrGlori Cream
---
[EvrGlori Cream](https://atozsupplement.com/evrglori-cream/) You can find various over-the-counter products intended to remove skin tags. These typically come as creams, ointments, or liquids that you apply directly to the skin tag. These products often contain ingredients like salicylic acid or tea tree oil, which can help dry out the skin tag over time. It's essential to carefully follow the directions on the product. Tying off the skin tag: this technique involves tying a small string tightly around the base of the skin tag to cut off its blood supply. Without a blood supply, the skin tag will eventually fall off.
VISIT HERE FOR OFFICIAL WEBSITE:-https://atozsupplement.com/evrglori-cream/
|
yroshan/moviescript
|
yroshan
| 2023-10-25T05:09:41Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-24T10:37:30Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: moviescript
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# moviescript
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 47 | 3.6236 |
| No log | 2.0 | 94 | 3.5828 |
| No log | 3.0 | 141 | 3.5741 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
bellagio-ai/Walter-person-xl-dreambooth
|
bellagio-ai
| 2023-10-25T05:00:50Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-10-25T01:44:32Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of Walter person
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - bellagio-ai/Walter-person-xl-dreambooth
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of Walter person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
abh1nav/wav2vec2-large-xls-r-300m-hindi-333
|
abh1nav
| 2023-10-25T04:54:34Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-10-25T04:42:21Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-333
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-333
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 1.18.3
- Tokenizers 0.14.1
|
lucky1357/gmo-500-0
|
lucky1357
| 2023-10-25T04:48:50Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-10-25T04:48:46Z |
---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
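For reference, the 4-bit NF4 settings listed above correspond one-to-one to the keyword arguments typically passed to `transformers.BitsAndBytesConfig`. The sketch below reconstructs them as a plain dictionary (an assumption about the exact kwarg names in your `transformers` version — verify before reuse):

```python
# Sketch: the quantization config above, expressed as the kwargs one would
# pass to transformers.BitsAndBytesConfig (names assumed from the list above).
bnb_4bit_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",          # NormalFloat4 quantization
    "bnb_4bit_use_double_quant": False,    # no nested quantization
    "bnb_4bit_compute_dtype": "float16",   # matmuls run in fp16
}

# Sanity check: exactly one of the 8-bit / 4-bit switches is enabled.
assert bnb_4bit_kwargs["load_in_8bit"] != bnb_4bit_kwargs["load_in_4bit"]
```

These kwargs would normally be unpacked into `BitsAndBytesConfig(**bnb_4bit_kwargs)` and passed as `quantization_config` when loading the base model.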
### Framework versions
- PEFT 0.6.0.dev0
| Bandika/DollyV1 | Bandika | 2023-10-25T04:31:01Z | 2 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:EleutherAI/gpt-j-6b", "base_model:adapter:EleutherAI/gpt-j-6b", "region:us"] | null | 2023-10-25T04:30:59Z |
---
library_name: peft
base_model: EleutherAI/gpt-j-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
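Unlike the previous card, this run used 8-bit (not 4-bit) quantization. As a sketch, the settings map to `transformers.BitsAndBytesConfig` kwargs as follows (kwarg names assumed from the list above — verify against your `transformers` version; the `bnb_4bit_*` entries are inert here because `load_in_4bit` is off):

```python
# Sketch: 8-bit quantization config from the list above, as kwargs for
# transformers.BitsAndBytesConfig (names assumed, not taken from this card's code).
bnb_8bit_kwargs = {
    "load_in_8bit": True,
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,             # outlier threshold for LLM.int8()
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",          # unused: 4-bit mode is disabled
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}

# Sanity check: exactly one of the 8-bit / 4-bit switches is enabled.
assert bnb_8bit_kwargs["load_in_8bit"] != bnb_8bit_kwargs["load_in_4bit"]
```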
### Framework versions
- PEFT 0.6.0.dev0