| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
rociortizb/predict_rugby
|
rociortizb
| 2023-07-21T15:22:02Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-07-06T09:10:12Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# predict_rugby
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```bash
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("rociortizb/predict_rugby")
topic_model.get_topic_info()
```
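Once loaded, the model can also assign topics to new documents with `transform`; a minimal sketch (the example sentence is illustrative, not taken from the training data):
```python
from bertopic import BERTopic

# Load the trained topic model and assign topics to unseen documents.
topic_model = BERTopic.load("rociortizb/predict_rugby")
docs = ["The Springboks named their squad for the Rugby Championship."]  # illustrative example
topics, probs = topic_model.transform(docs)
print(topics)                             # predicted topic id(s)
print(topic_model.get_topic(topics[0]))   # top keywords for that topic
```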
## Topic overview
* Number of topics: 40
* Number of training documents: 27774
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | blacks - new - new zealand - zealand - hurricanes | 1556 | 0_blacks_new_new zealand_zealand |
| 1 | springboks - south - africa - south africa - erasmus | 1297 | 1_springboks_south_africa_south africa |
| 2 | springbok - springboks - year - players - world | 1205 | 2_springbok_springboks_year_players |
| 3 | stormers - lions - van - said - team | 1163 | 3_stormers_lions_van_said |
| 4 | cheetahs - van - griquas - pumas - province | 1149 | 4_cheetahs_van_griquas_pumas |
| 5 | sevens - series - fiji - blitzboks - pool | 994 | 5_sevens_series_fiji_blitzboks |
| 6 | brumbies - reds - rebels - rugby - super | 953 | 6_brumbies_reds_rebels_rugby |
| 7 | premiership - tom - exeter - wasps - saracens | 928 | 7_premiership_tom_exeter_wasps |
| 8 | ireland - sexton - schmidt - irish - leinster | 911 | 8_ireland_sexton_schmidt_irish |
| 9 | france - french - racing - year - club | 904 | 9_france_french_racing_year |
| 10 | try - leinster - minutes - munster - penalty | 876 | 10_try_leinster_minutes_munster |
| 11 | stormers - lions - south - game - team | 870 | 11_stormers_lions_south_game |
| 12 | sharks - du - preez - du preez - bosch | 851 | 12_sharks_du_preez_du preez |
| 13 | wallabies - australia - folau - rugby - said | 837 | 13_wallabies_australia_folau_rugby |
| 14 | england - jones - harlequins - squad - george | 807 | 14_england_jones_harlequins_squad |
| 15 | england - jones - world - world cup - wales | 790 | 15_england_jones_world_world cup |
| 16 | crusaders - highlanders - hurricanes - blues - chiefs | 769 | 16_crusaders_highlanders_hurricanes_blues |
| 17 | italy - france - england - ireland - scotland | 763 | 17_italy_france_england_ireland |
| 18 | wallabies - australia - cheika - said - blacks | 736 | 18_wallabies_australia_cheika_said |
| 19 | disciplinary - committee - foul play - foul - player | 722 | 19_disciplinary_committee_foul play_foul |
| 20 | clermont - stade - montpellier - toulon - toulouse | 688 | 20_clermont_stade_montpellier_toulon |
| 21 | blacks - new - zealand - new zealand - foster | 673 | 21_blacks_new_zealand_new zealand |
| 22 | wales - davies - ospreys - scarlets - cardiff | 666 | 22_wales_davies_ospreys_scarlets |
| 23 | bulls - van - stormers - lions - sharks | 660 | 23_bulls_van_stormers_lions |
| 24 | bulls - van - white - rugby - loftus | 615 | 24_bulls_van_white_rugby |
| 25 | rugby - super - super rugby - competition - new | 531 | 25_rugby_super_super rugby_competition |
| 26 | scotland - glasgow - edinburgh - townsend - russell | 529 | 26_scotland_glasgow_edinburgh_townsend |
| 27 | brumbies - waratahs - reds - rebels - force | 521 | 27_brumbies_waratahs_reds_rebels |
| 28 | pro14 - leinster - ulster - scarlets - 19 | 490 | 28_pro14_leinster_ulster_scarlets |
| 29 | rugby - world - world rugby - nations - cup | 467 | 29_rugby_world_world rugby_nations |
| 30 | argentina - santiago - pumas - juan - matias | 447 | 30_argentina_santiago_pumas_juan |
| 31 | club - premiership - season - rugby - gloucester | 436 | 31_club_premiership_season_rugby |
| 32 | club - premiership - saracens - wasps - salary | 423 | 32_club_premiership_saracens_wasps |
| 33 | gatland - lions - wales - tour - barbarians | 361 | 33_gatland_lions_wales_tour |
| 34 | africa - south africa - south - zealand - new zealand | 339 | 34_africa_south africa_south_zealand |
| 35 | marais - saru - union - rugby - president | 317 | 35_marais_saru_union_rugby |
| 36 | kings - southern kings - southern - davids - schalk | 217 | 36_kings_southern kings_southern_davids |
| 37 | vs - referees - match official - official - assistant referees | 171 | 37_vs_referees_match official_official |
| 38 | sunwolves - japan - super - super rugby - 15 | 116 | 38_sunwolves_japan_super_super rugby |
| 39 | burgess - lancaster - england - bath - union | 26 | 39_burgess_lancaster_england_bath |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: [['United Rugby Championship', 'Ireland', 'Wales', 'Scotland', 'South Africa', 'Italian', 'Pro14'], ['Pro14', 'Edinburgh', 'Glasgow', 'Scarlets', 'Ospreys', 'Zebre', 'Benetton', 'Connacht', 'Leinster', 'Ulster', 'Munster'], ['European Cup', 'European', 'Heineken', 'competition', 'Toulon', 'Saracens', 'Leinster'], ['Premiership', 'England', 'Exeter', 'Saracens', 'Wasps', 'Leicester', 'Harlequins', 'Sale', 'Bristol', 'Northampton'], ['Sevens', 'Fiji', 'New Zealand', 'South Africa', 'England', 'Australia', 'series', 'HSBC', 'Olympics'], ['Super Rugby', 'New Zealand', 'Australia', 'South Africa', 'Argentina', 'Japan', 'Blues', 'Brumbies', 'Crusaders', 'Sharks', 'Stormers'], ['Six Nations', 'England', 'Wales', 'Ireland', 'Scotland', 'France', 'Italy', 'Championship', 'Grand Slam'], ['Currie Cup', 'South Africa', 'Bulls', 'Lions', 'Sharks', 'Cheetahs', 'Western Province', 'domestic', 'provincial'], ['World Cup', 'international', 'New Zealand', 'Australia', 'South Africa', 'England', 'Wales', 'France'], ['Rugby Championship', 'New Zealand', 'Australia', 'South Africa', 'Argentina', 'All Blacks', 'Wallabies', 'Springboks', 'Pumas'], ['British Irish Lions', 'South Africa', 'New Zealand', 'Australia']]
* top_n_words: 30
* verbose: True
## Framework versions
* Numpy: 1.21.0
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 2.0.2
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.57.0
* Plotly: 5.14.1
* Python: 3.9.6
|
sergeindamix/llama2-qlora-finetunined-frenchTest
|
sergeindamix
| 2023-07-21T15:21:52Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T15:21:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
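A minimal loading sketch that mirrors the 4-bit NF4 settings listed above. Note that the base model name below is an assumption (the repository name suggests a Llama-2 checkpoint); substitute the checkpoint the adapter was actually trained from:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 quantization config mirroring the training settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_name = "meta-llama/Llama-2-7b-hf"  # assumption: replace with the actual base checkpoint
base = AutoModelForCausalLM.from_pretrained(
    base_model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the QLoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "sergeindamix/llama2-qlora-finetunined-frenchTest")
```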
### Framework versions
- PEFT 0.5.0.dev0
|
jayanta/bert-base-cased-sentweet-hatespeech
|
jayanta
| 2023-07-21T15:17:35Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T15:06:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-sentweet-hatespeech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-sentweet-hatespeech
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4203
- Accuracy: 0.8299
- Precision: 0.8430
- Recall: 0.8353
- F1: 0.8294
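For a quick inference check, the checkpoint can be loaded with the standard `pipeline` API; a minimal sketch (the example text is illustrative, and the meaning of the returned labels should be checked against the model's `id2label` mapping in its config):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for sequence classification.
classifier = pipeline("text-classification", model="jayanta/bert-base-cased-sentweet-hatespeech")

# Illustrative input; label names come from the model's config (id2label).
print(classifier("This is an example tweet."))
```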
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4203 | 0.8299 | 0.8430 | 0.8353 | 0.8294 |
| No log | 2.0 | 162 | 0.4448 | 0.7917 | 0.7921 | 0.7930 | 0.7916 |
| No log | 3.0 | 243 | 0.4748 | 0.7812 | 0.7806 | 0.7812 | 0.7808 |
| No log | 4.0 | 324 | 0.5806 | 0.7674 | 0.7674 | 0.7653 | 0.7659 |
| No log | 5.0 | 405 | 0.7538 | 0.7917 | 0.7922 | 0.7895 | 0.7902 |
| No log | 6.0 | 486 | 0.8612 | 0.7847 | 0.7852 | 0.7825 | 0.7832 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
|
7erminalVelociraptor/Airochronos-33b-Guanaco
|
7erminalVelociraptor
| 2023-07-21T15:01:12Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T19:09:22Z |
This is [Henk717's merge of Chronos and Airoboros 1.4](https://huggingface.co/Henk717/airochronos-33B) with [Tim Dettmers' Guanaco](https://huggingface.co/timdettmers/guanaco-33b) applied as a LoRA.
It is mainly intended for character roleplay and creative writing. Initial testing suggests it does a reasonable job at this, but it is too early to say how it compares to Airochronos. Other tasks, such as coding or logic, have not been reviewed.
The model has been tested with Alpaca's prompt style (### Instruction: and ### Response:), as this is what Chronos and Guanaco use.
Keep in mind that no part of this model is censored, so it can output NSFW or other unfiltered content. Use at your own discretion.
|
vineetsharma/Reinforce-Pixelcopter-PLE-v0
|
vineetsharma
| 2023-07-21T14:55:15Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T09:47:00Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.30 +/- 27.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Falah/stable_diffusion_prompts_dataset_fine_tune_llms_qlora
|
Falah
| 2023-07-21T14:45:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T14:08:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
gFulvio/moralstories-bart-norm.action-context_gen
|
gFulvio
| 2023-07-21T14:39:17Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"dataset:demelin/moral_stories",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-21T14:35:42Z |
---
datasets:
- demelin/moral_stories
---
|
Evan-Lin/Bart-RL-many-keywordmax1-attractive1-reward1-epoch2
|
Evan-Lin
| 2023-07-21T14:37:01Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-07-21T04:16:11Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-RL-many-keywordmax1-attractive1-reward1-epoch2")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpl28h_pjf/Evan-Lin/Bart-RL-many-keywordmax1-attractive1-reward1-epoch2")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpl28h_pjf/Evan-Lin/Bart-RL-many-keywordmax1-attractive1-reward1-epoch2")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
abertsch/bart-base-summscreen
|
abertsch
| 2023-07-21T14:32:16Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:yuvalkirstain/summ_screen_fd_t5_lm",
"dataset:urialon/summ_screen_validation",
"dataset:urialon/summ_screen_test",
"arxiv:2305.01625",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-03T14:55:16Z |
---
datasets:
- yuvalkirstain/summ_screen_fd_t5_lm
- urialon/summ_screen_validation
- urialon/summ_screen_test
pipeline_tag: text2text-generation
---
Baseline model for the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model as a baseline. It was finetuned on the dataset SummScreen using the data preprocessing pipeline from SLED; to load the validation or test set for use with this model, please use the datasets [urialon/summ_screen_validation](https://huggingface.co/datasets/urialon/summ_screen_validation) and [urialon/summ_screen_test](https://huggingface.co/datasets/urialon/summ_screen_test).
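As a loading sketch (the split and column names depend on the dataset repositories and are not documented here):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Baseline BART-base summarization model and the SLED-preprocessed SummScreen splits.
tokenizer = AutoTokenizer.from_pretrained("abertsch/bart-base-summscreen")
model = AutoModelForSeq2SeqLM.from_pretrained("abertsch/bart-base-summscreen")

validation = load_dataset("urialon/summ_screen_validation")  # DatasetDict; inspect its splits/columns
test = load_dataset("urialon/summ_screen_test")
```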
|
abertsch/unlimiformer-bart-govreport-earlyk
|
abertsch
| 2023-07-21T14:32:14Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:ccdv/govreport-summarization",
"dataset:urialon/gov_report_validation",
"dataset:urialon/gov_report_test",
"arxiv:2305.01625",
"region:us"
] |
text2text-generation
| 2023-05-03T14:52:23Z |
---
datasets:
- ccdv/govreport-summarization
- urialon/gov_report_validation
- urialon/gov_report_test
pipeline_tag: text2text-generation
inference: false
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625)
This is a BART-base model finetuned using Unlimiformer-aware early stopping, as described in section 3.1 of the paper. The model was finetuned on GovReport using the data processing pipeline from SLED; to load the validation or test set for use with this model, please use the datasets [urialon/gov_report_validation](https://huggingface.co/datasets/urialon/gov_report_validation) and [urialon/gov_report_test](https://huggingface.co/datasets/urialon/gov_report_test).
This is generally a weaker model than the [alternating-training model](https://huggingface.co/abertsch/unlimiformer-bart-govreport-alternating) and a stronger model than the [baseline](https://huggingface.co/abertsch/bart-base-govreport).
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
|
abertsch/unlimiformer-bart-summscreen-retrieval
|
abertsch
| 2023-07-21T14:32:10Z | 106 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:yuvalkirstain/summ_screen_fd_t5_lm",
"dataset:urialon/summ_screen_validation",
"dataset:urialon/summ_screen_test",
"arxiv:2305.01625",
"region:us"
] |
text2text-generation
| 2023-05-03T14:57:24Z |
---
datasets:
- yuvalkirstain/summ_screen_fd_t5_lm
- urialon/summ_screen_validation
- urialon/summ_screen_test
pipeline_tag: text2text-generation
inference: false
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model using the retrieval-augmented training strategy described in section 3.2 of the paper. It was finetuned on the dataset SummScreen using the data preprocessing pipeline from SLED; to load the validation or test set for use with this model, please use the datasets [urialon/summ_screen_validation](https://huggingface.co/datasets/urialon/summ_screen_validation) and [urialon/summ_screen_test](https://huggingface.co/datasets/urialon/summ_screen_test).
This is the strongest of the Unlimiformer models for SummScreen.
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
|
usmanhaider95/falcon7b_dense
|
usmanhaider95
| 2023-07-21T14:21:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-20T12:11:23Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Aspik101/Llama-2-7b-chat-hf-pl-lora_adapter_model
|
Aspik101
| 2023-07-21T14:20:48Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-07-21T14:20:46Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Mel-Iza0/RedPajama-ZeroShot-20K-classe_bias
|
Mel-Iza0
| 2023-07-21T14:13:20Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-21T11:55:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Mizuiro-sakura/luke-japanese-base-finetuned-QA
|
Mizuiro-sakura
| 2023-07-21T14:11:02Z | 163 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"luke",
"question-answering",
"squad",
"question answering",
"ja",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-15T23:38:30Z |
---
license: mit
language: ja
tags:
- luke
- question-answering
- squad
- pytorch
- transformers
- question answering
---
# This model is luke-japanese-base-lite fine-tuned for Question Answering
This model was created by fine-tuning luke-japanese-base-lite on the Driving-domain QA dataset (DDQA) ( https://nlp.ist.i.kyoto-u.ac.jp/index.php?Driving%20domain%20QA%20datasets ).
It can be used for SQuAD-style Question-Answering tasks.
# Model accuracy
'em' (exact match): 0.845933014354067, 'f1': 0.9197176274789681
# How to use
Install sentencepiece and transformers (pip install sentencepiece, pip install transformers),
then run the following code to solve Question-Answering tasks.
```python
import torch
from transformers import AutoTokenizer, LukeForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-QA')
model = LukeForQuestionAnswering.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-QA')  # load the fine-tuned model
text = {
    'context': '私の名前はEIMIです。好きな食べ物は苺です。 趣味は皆さんと会話することです。',
    'question': '好きな食べ物は何ですか'
}
input_ids = tokenizer.encode(text['question'], text['context'])  # tokenize the question and context into input ids
output = model(torch.tensor([input_ids]))  # run the fine-tuned model
prediction = tokenizer.decode(input_ids[torch.argmax(output.start_logits): torch.argmax(output.end_logits)])  # extract the span corresponding to the answer
print(prediction)
```
# What is LUKE? [1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on the transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities; it treats words and entities as independent tokens and outputs contextualized representations of them.
# Acknowledgments
I would like to thank LUKE's developer Mr. Yamada (@ikuyamada) and Studio Ousia (@StudioOusia).
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner
|
Mizuiro-sakura
| 2023-07-21T14:10:02Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"token-classification",
"deberta",
"named entity recognition",
"named-entity-recognition",
"ner",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-10T13:22:23Z |
---
license: mit
language: ja
library_name: transformers
tags:
- pytorch
- deberta
- deberta-v2
- named entity recognition
- named-entity-recognition
- ner
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
---
# This model is deberta-v2-large-japanese fine-tuned for Named Entity Recognition (NER)
This model was created by fine-tuning deberta-v2-large-japanese on a Japanese named-entity-recognition dataset built from Wikipedia (Stockmark Inc., https://github.com/stockmarkteam/ner-wikipedia-dataset ).
It can be used for NER tasks.
# How to use
Install transformers, pytorch, sentencepiece, and Juman++,
then run the following code to solve named-entity-recognition tasks.
```python
from transformers import AutoTokenizer, pipeline, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner')
model = AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner')  # load the fine-tuned model
text = '昨日は東京で買い物をした'
ner = pipeline('ner', model=model, tokenizer=tokenizer)
result = ner(text)
print(result)
```
# Model accuracy
Overall: 0.7974729241877256

| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| その他の組織名 (other organization names) | 0.72 | 0.72 | 0.72 | 238 |
| イベント名 (event names) | 0.73 | 0.85 | 0.79 | 215 |
| 人名 (person names) | 0.83 | 0.89 | 0.86 | 547 |
| 地名 (place names) | 0.79 | 0.80 | 0.80 | 446 |
| 政治的組織名 (political organization names) | 0.78 | 0.83 | 0.80 | 263 |
| 施設名 (facility names) | 0.74 | 0.84 | 0.79 | 241 |
| 法人名 (corporation names) | 0.84 | 0.80 | 0.82 | 487 |
| 製品名 (product names) | 0.65 | 0.78 | 0.71 | 252 |
| micro avg | 0.77 | 0.82 | 0.80 | 2689 |
| macro avg | 0.76 | 0.82 | 0.79 | 2689 |
| weighted avg | 0.78 | 0.82 | 0.80 | 2689 |
# What is deberta-v2-base-japanese?
It is a model trained on Japanese Wikipedia (3.2 GB), CC-100 (85 GB), and OSCAR (54 GB), released by the Kurohashi Lab at Kyoto University.
# Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
# Acknowledgments
I would like to thank the Kurohashi Lab at Kyoto University for releasing the base model.
|
paldhikar/llama2-qlora-finetunined-odaqna
|
paldhikar
| 2023-07-21T14:06:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T14:05:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
chandan9t8/poca-SoccerTwos
|
chandan9t8
| 2023-07-21T13:56:16Z | 17 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-21T13:54:52Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chandan9t8/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
casque/bukkake_v0.4
|
casque
| 2023-07-21T13:55:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-21T13:54:53Z |
---
license: creativeml-openrail-m
---
|
mohadfarag1/finetuning-emotion-model
|
mohadfarag1
| 2023-07-21T13:52:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T09:17:47Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-emotion-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-emotion-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2250
- Accuracy: 0.9225
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3299 | 0.9035 | 0.9019 |
| 0.5464 | 2.0 | 500 | 0.2250 | 0.9225 | 0.9223 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_l5_v50_manual
|
KingKazma
| 2023-07-21T13:49:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T13:49:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
oleksandrfluxon/mpt-7b-instruct
|
oleksandrfluxon
| 2023-07-21T13:31:28Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T09:32:13Z |
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
duplicated_from: mosaicml/mpt-7b-instruct
---
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
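For example, reusing the `pipe` and `torch` from the earlier pipeline snippet (a sketch; the sampling settings are illustrative):
```python
# fmt_ex comes from the formatting snippet above; pipe and torch from the earlier pipeline example.
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(pipe(fmt_ex, max_new_tokens=100, do_sample=True, use_cache=True))
```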
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
mccoole/setfit-distiluse-base-multilingual-cased-v2-finetuned-amazon-reviews-multi-binary
|
mccoole
| 2023-07-21T13:30:09Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-21T13:29:11Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# mccoole/setfit-distiluse-base-multilingual-cased-v2-finetuned-amazon-reviews-multi-binary
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("mccoole/setfit-distiluse-base-multilingual-cased-v2-finetuned-amazon-reviews-multi-binary")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
DeepPavlov/t5-wikidata5M-with-neighbors
|
DeepPavlov
| 2023-07-21T13:29:51Z | 123 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-21T13:09:37Z |
---
license: openrail
language:
- en
metrics:
- accuracy
pipeline_tag: text2text-generation
widget:
- text: "predict [SEP] Arman Kirakossian country of citizenship [SEP] place of birth Yerevan [SEP] instance of human [SEP] occupation diplomat [SEP] occupation historian [SEP] ethnic group Armenians [SEP]"
example_title: "Predict country of citizenship"
---
This is a t5-small model trained on the wikidata5M dataset.
This model was trained on tail and entity prediction in a knowledge graph using the graph's context represented by the node's neighborhood.
Textual representations were obtained from Wikidata entity and relation titles. Entity descriptions were used to disambiguate entities that share the same title; if no disambiguation was possible even then, we assigned unique numerical IDs to such entities.
The neighborhood for the input was obtained as follows:
1. sort the neighborhood by semantic similarity of relations from its triplets to the relation from the input triplet in order to prioritize more important information in the context;
2. limit the sorted neighborhood to 512 triplets, since this will always be at least as big as the size of the allowed context, and, after verbalization, specify the maximum length of 512 for the model tokenizer to fit the resulting verbalized neighborhood representation into the language model context.
Neighborhood sorting by semantic proximity was performed using a pre-calculated matrix of cosine similarity of relations in KG, for similarity calculation the relations were embedded by the fasttext model.
We trained the model on the Wikidata5M dataset for approximately 5M iterations on 8xA100 GPUs using a batch size of 320.
To evaluate the model, we sample 50 times from the decoder for each input and then rank the predictions by their log probabilities. We achieve 0.319 Hits@1 on the test set.
One can load this model for personal use or fine-tuning as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
model = AutoModelForSeq2SeqLM.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
```
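A minimal generation sketch using the widget example from this card (the decoding settings below are illustrative assumptions, not the exact evaluation setup, which samples 50 candidates and ranks them by log probability):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
model = AutoModelForSeq2SeqLM.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")

# Verbalized input: task prefix, query triplet, then the sorted neighborhood (see above).
text = ("predict [SEP] Arman Kirakossian country of citizenship [SEP] place of birth Yerevan [SEP] "
        "instance of human [SEP] occupation diplomat [SEP] occupation historian [SEP] "
        "ethnic group Armenians [SEP]")
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, do_sample=True, num_return_sequences=5, max_new_tokens=16)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```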
|
dnarqq/q-Taxi-v3
|
dnarqq
| 2023-07-21T13:27:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T13:27:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="dnarqq/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dnarqq/q-FrozenLake-v1-4x4-noSlippery
|
dnarqq
| 2023-07-21T13:21:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T13:21:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dnarqq/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
digiplay/CoffeeMix_v2
|
digiplay
| 2023-07-21T13:10:52Z | 819 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T03:31:07Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/40630?modelVersionId=53475
Original Author's DEMO image:

|
tobijen/pegasus_left_heading
|
tobijen
| 2023-07-21T13:10:05Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text-generation",
"generated_from_trainer",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T12:27:16Z |
---
base_model: google/pegasus-large
tags:
- generated_from_trainer
model-index:
- name: pegasus_left_heading
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus_left_heading
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 114 | 3.3103 |
| No log | 2.0 | 228 | 2.6815 |
| No log | 3.0 | 342 | 2.5226 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
noavm/layoutlmv3-final-v4-BI
|
noavm
| 2023-07-21T13:05:53Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:12:44Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-final-v4-BI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-final-v4-BI
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1471
- Precision: 0.5571
- Recall: 0.4728
- F1: 0.5115
- Accuracy: 0.4681
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.26 | 10 | 3.1260 | 0.1518 | 0.1149 | 0.1308 | 0.1478 |
| No log | 0.51 | 20 | 2.9269 | 0.1409 | 0.0471 | 0.0706 | 0.1592 |
| No log | 0.77 | 30 | 2.7697 | 0.2328 | 0.1102 | 0.1496 | 0.2116 |
| No log | 1.03 | 40 | 2.6158 | 0.4015 | 0.2895 | 0.3364 | 0.3455 |
| No log | 1.28 | 50 | 2.4704 | 0.4495 | 0.3486 | 0.3927 | 0.3784 |
| No log | 1.54 | 60 | 2.3492 | 0.4964 | 0.4070 | 0.4473 | 0.4195 |
| No log | 1.79 | 70 | 2.2643 | 0.5243 | 0.4369 | 0.4766 | 0.4416 |
| No log | 2.05 | 80 | 2.1982 | 0.5454 | 0.4588 | 0.4984 | 0.4574 |
| No log | 2.31 | 90 | 2.1600 | 0.5516 | 0.4688 | 0.5068 | 0.4637 |
| No log | 2.56 | 100 | 2.1471 | 0.5571 | 0.4728 | 0.5115 | 0.4681 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.12.0
- Tokenizers 0.13.3
|
oleksandrfluxon/mpt-7b-chat-4bit
|
oleksandrfluxon
| 2023-07-21T13:04:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"dataset:jeffwan/sharegpt_vicuna",
"dataset:Hello-SimpleAI/HC3",
"dataset:tatsu-lab/alpaca",
"dataset:Anthropic/hh-rlhf",
"dataset:victor123/evol_instruct_70k",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-19T14:35:09Z |
---
license: cc-by-nc-sa-4.0
datasets:
- jeffwan/sharegpt_vicuna
- Hello-SimpleAI/HC3
- tatsu-lab/alpaca
- Anthropic/hh-rlhf
- victor123/evol_instruct_70k
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
duplicated_from: mosaicml/mpt-7b-chat
---
# MPT-7B-Chat
MPT-7B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**SamIAm85**:
>I want you to come up with a tweet based on this summary of the article:
>"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
>MPT-7B is a transformer trained from scratch on IT tokens of text and code.
>It is open source and it matches the quality of LLaMA-7B.
>MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k.
>Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch.
>For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens!
**MPT-7B-Instruct**:
>Here's my suggestion for a tweet:
>"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ"
**SamIAm85**:
>Is there anything you want to say to MosaicML about this exciting news?
**MPT-7B-Instruct**:
> As an Al language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs.
> This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools.
> Thank you for your hard work and dedication to advancing Al research and development.
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
### Training Configuration
This model was trained on 8 A100-80GBs for about 8.2 hours, followed by training for 6.7 hours on 32 A100-40GBs using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
    title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
arham061/arham-lora
|
arham061
| 2023-07-21T12:50:40Z | 2 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-20T06:49:59Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - arham061/arham-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the arham061/arham dataset. You can find some example images below.




|
fadliaulawi/mt5-small-finetuned-amazon-en-es
|
fadliaulawi
| 2023-07-21T12:48:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-19T14:22:02Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: fadliaulawi/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fadliaulawi/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.5412
- Validation Loss: 5.4026
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 1209, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
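The optimizer entry above (AdamWeightDecay driven by a linear PolynomialDecay schedule) corresponds roughly to the sketch below using the `transformers` Keras helper; this is an illustration of the configuration, not the original training script.
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay from 5.6e-5 to 0 over 1209 steps and
# weight decay 0.01, matching the config listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,
    num_train_steps=1209,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```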
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.5412 | 5.4026 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sazyou-roukaku/sazyou_LoRA
|
sazyou-roukaku
| 2023-07-21T12:30:25Z | 0 | 27 | null |
[
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-18T11:16:51Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
---
A holding place for LECO and LoRA prototypes.<br>
<br>
**① Chest-size slider LoRA (huge_breasts_woman/flat_chest_woman)** <br>
A chest-size slider created and tuned with LECO. Trigger word: woman<br>
No breasts-type prompts are needed.<br>
Inference samples were generated with LittleStepMix_A.<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/huge_breasts_woman.jpg" width="100%" height="100%">
<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/flat_chest_woman.jpg" width="100%" height="100%">
<br>
<br>
**② Multi-colored hair LoRA (pastel_hair_full/pastel_hair_A/pastel_hair_B)** <br>
A LoRA created and tuned with LECO that produces multi-colored hair. Trigger word: hair<br>
Specifying only the hair length already gives quite colorful results. Adding a main color also works, and bleed-through into clothing is minimal.<br>
Adding *(black hair,brown hair:1.5)* to the negative prompt is recommended.<br>
The samples were generated with LittleStepMix_A, so results will differ considerably between models.<br>
"full" produces every color with a strong pastel tone; "A" produces no whites and weaker greens; "B" gives more contrast than "full".
<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/pastel_hair.jpg" width="100%" height="100%">
|
oshizo/comment-generation-japanese-3.6b-lora
|
oshizo
| 2023-07-21T12:30:03Z | 0 | 4 | null |
[
"ja",
"license:mit",
"region:us"
] | null | 2023-07-21T11:52:15Z |
---
license: mit
language:
- ja
---
# Overview
YouTube Liveなどのライブ配信での視聴者コメントのようなテキストを生成するモデルです。
[rinna/japanese-gpt-neox-3.6b-instruction-ppo](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo)をLoraで学習したadapter_modelのみをアップロードしました。
This model generates text like viewer comments in live streaming, such as YouTube Live. This model was trained on [rinna/japanese-gpt-neox-3.6b-instruction-ppo](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) using Lora.
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", torch_dtype=torch.float16, device_map="auto")
from peft import PeftModel
peft_model = PeftModel.from_pretrained(model, "oshizo/comment-generation-japanese-3.6b-lora", device_map="auto")
prompt = f"ユーザー: 今朝うちの小さな畑でトマトがね、いい感じに赤くなってたんだよね。そのまま通学路を歩いてたんだけどさ、一つちぎって弁当に入れておけば良かっな~と思って。トマト可愛くて好き。<NL>システム: "
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
do_sample=True,
max_new_tokens=32,
num_return_sequences=4,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
for output in output_ids.tolist():
print(tokenizer.decode(output[token_ids.size(1):], skip_special_tokens=True))
# これから剥くの面倒くさいよ<NL>
# なんやその可愛い好きは<NL>
# 冷やしておくと美味しいよな<NL>
# 食レポ具体的に<NL>
~~~~
|
intellya22/test-model-000
|
intellya22
| 2023-07-21T12:11:48Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-21T12:08:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5956 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 2978,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 596,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NasimB/guten-2p5k-new-loop-tokenize
|
NasimB
| 2023-07-21T11:56:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T09:14:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-2p5k-new-loop-tokenize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-2p5k-new-loop-tokenize
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.721 | 0.31 | 500 | 5.7074 |
| 5.3696 | 0.63 | 1000 | 5.2582 |
| 5.0072 | 0.94 | 1500 | 5.0135 |
| 4.7225 | 1.26 | 2000 | 4.8583 |
| 4.5837 | 1.57 | 2500 | 4.7320 |
| 4.4669 | 1.89 | 3000 | 4.6175 |
| 4.2663 | 2.2 | 3500 | 4.5607 |
| 4.1693 | 2.51 | 4000 | 4.4896 |
| 4.1248 | 2.83 | 4500 | 4.4286 |
| 3.976 | 3.14 | 5000 | 4.4119 |
| 3.8481 | 3.46 | 5500 | 4.3787 |
| 3.8327 | 3.77 | 6000 | 4.3406 |
| 3.7401 | 4.09 | 6500 | 4.3356 |
| 3.5641 | 4.4 | 7000 | 4.3274 |
| 3.5468 | 4.71 | 7500 | 4.3126 |
| 3.5201 | 5.03 | 8000 | 4.3081 |
| 3.3625 | 5.34 | 8500 | 4.3132 |
| 3.3604 | 5.66 | 9000 | 4.3114 |
| 3.36 | 5.97 | 9500 | 4.3106 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Ahmet2250/ppo-Huggy
|
Ahmet2250
| 2023-07-21T11:52:33Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:52:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ahmet2250/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
muditash/flan-t5-large-financial-phrasebank-lora
|
muditash
| 2023-07-21T11:46:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T11:34:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
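For reference, a hypothetical loading sketch consistent with the 8-bit config above; the base model (`google/flan-t5-large`) is inferred from the repository name rather than stated explicitly in this card.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # mirrors load_in_8bit: True above
base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-large", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "muditash/flan-t5-large-financial-phrasebank-lora")
```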
### Framework versions
- PEFT 0.5.0.dev0
|
guardrail/llama-2-7b-guanaco-instruct-sharded
|
guardrail
| 2023-07-21T11:46:32Z | 1,609 | 8 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:timdettmers/openassistant-guanaco",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T11:08:14Z |
---
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
---
Model fine-tuned in 4-bit precision using QLoRA on [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) and sharded so that it can be loaded in 4-bit on a free Google Colab instance.
It can be easily imported using the `AutoModelForCausalLM` class from `transformers`:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "guardrail/llama-2-7b-guanaco-instruct-sharded"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
```
|
Fire-Hound/llama2-qlora-finetunined-french
|
Fire-Hound
| 2023-07-21T11:42:46Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T11:42:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
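A hypothetical loading sketch that mirrors the nf4 config above; the base model id is an assumption, since this repository ships only the PEFT adapter.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "Fire-Hound/llama2-qlora-finetunined-french")
```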
### Framework versions
- PEFT 0.5.0.dev0
|
monster-labs/control_v1p_sd15_qrcode_monster
|
monster-labs
| 2023-07-21T11:35:31Z | 124,386 | 1,383 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"controlnet",
"qrcode",
"en",
"license:openrail++",
"region:us"
] | null | 2023-06-24T15:07:20Z |
---
tags:
- stable-diffusion
- controlnet
- qrcode
license: openrail++
language:
- en
---
# Controlnet QR Code Monster v2 For SD-1.5

## Model Description
This model is made to generate creative QR codes that still scan.
Keep in mind that not all generated codes might be readable, but you can try different parameters and prompts to get the desired results.
**NEW VERSION**
Introducing the upgraded version of our model - Controlnet QR code Monster v2.
V2 is a huge upgrade over v1, for scannability AND creativity.
QR codes can now blend seamlessly into the image when a gray-colored background (#808080) is used.
As with the previous version, the readability of some generated codes may vary; experimenting with parameters and prompts can yield better results.
You can find it in the `v2/` subfolder.
## How to Use
- **Condition**: QR codes are passed as condition images with a module size of 16px. Use a higher error correction level to make it easier to read (sometimes a lower level can be easier to read if smaller in size). Use a gray background for the rest of the image to make the code integrate better.
- **Prompts**: Use a prompt to guide the QR code generation. The output will highly depend on the given prompt. Some seem to be really easily accepted by the qr code process, some will require careful tweaking to get good results.
- **Controlnet guidance scale**: Set the controlnet guidance scale value:
- High values: The generated QR code will be more readable.
- Low values: The generated QR code will be more creative.
### Tips
- For an optimally readable output, try generating multiple QR codes with similar parameters, then choose the best ones.
- Use the Image-to-Image feature to improve the readability of a generated QR code:
- Decrease the denoising strength to retain more of the original image.
- Increase the controlnet guidance scale value for better readability.
A typical workflow for "saving" a code would be:
Max out the guidance scale and minimize the denoising strength, then bump the strength until the code scans.
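Below is a minimal `diffusers` sketch of this workflow; the condition image path, prompt, and base checkpoint are placeholders, and `controlnet_conditioning_scale` plays the role of the ControlNet guidance scale discussed above.
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

condition = load_image("qr_condition.png")  # your QR code on a gray (#808080) background
image = pipe(
    "a medieval village, intricate detail",  # placeholder prompt
    image=condition,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.3,  # higher = more readable, lower = more creative
).images[0]
image.save("qr_monster_output.png")
```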
## Example Outputs
Here are some examples of creative, yet scannable QR codes produced by our model:



Feel free to experiment with prompts, parameters, and the Image-to-Image feature to achieve the desired QR code output. Good luck and have fun!
|
NasimB/cbt-mod-formatting-noem-rarity-log-rarity
|
NasimB
| 2023-07-21T11:18:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T08:09:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-mod-formatting-noem-rarity-log-rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-mod-formatting-noem-rarity-log-rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3453 | 0.29 | 500 | 5.3477 |
| 5.0323 | 0.58 | 1000 | 4.9403 |
| 4.7055 | 0.87 | 1500 | 4.6938 |
| 4.4394 | 1.17 | 2000 | 4.5543 |
| 4.2985 | 1.46 | 2500 | 4.4393 |
| 4.1999 | 1.75 | 3000 | 4.3368 |
| 4.0751 | 2.04 | 3500 | 4.2617 |
| 3.8966 | 2.33 | 4000 | 4.2210 |
| 3.866 | 2.62 | 4500 | 4.1640 |
| 3.8246 | 2.92 | 5000 | 4.1122 |
| 3.642 | 3.21 | 5500 | 4.1066 |
| 3.581 | 3.5 | 6000 | 4.0790 |
| 3.5663 | 3.79 | 6500 | 4.0482 |
| 3.484 | 4.08 | 7000 | 4.0436 |
| 3.3128 | 4.37 | 7500 | 4.0395 |
| 3.3126 | 4.66 | 8000 | 4.0255 |
| 3.2976 | 4.96 | 8500 | 4.0148 |
| 3.1535 | 5.25 | 9000 | 4.0252 |
| 3.1321 | 5.54 | 9500 | 4.0246 |
| 3.1277 | 5.83 | 10000 | 4.0233 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
TheUpperCaseGuy/finetune-lora-stable-diffusion
|
TheUpperCaseGuy
| 2023-07-21T11:15:12Z | 12 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-21T09:12:24Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - TheUpperCaseGuy/finetune-lora-stable-diffusion
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the TheUpperCaseGuy/saad dataset. You can find some example images below.




|
rimonim/distrollbert-cased
|
rimonim
| 2023-07-21T11:09:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T09:52:37Z |
DistilBERT cased model fine-tuned for identifying trolls on Twitter.
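A minimal usage sketch is shown below; the input text is a placeholder, and the label names come from the model's own config, so treat the exact output as an assumption.
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub.
classifier = pipeline("text-classification", model="rimonim/distrollbert-cased")
print(classifier("You are all being manipulated, wake up!"))  # placeholder tweet
# -> e.g. [{'label': ..., 'score': ...}]
```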
|
TinyPixel/xgen-7b-8k-base-bf16-sharded
|
TinyPixel
| 2023-07-21T11:08:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T10:57:59Z |
model = "TinyPixel/xgen-7b-8k-base-bf16-sharded"
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
|
1daniar/q-FrozenLake-v1-4x4-noSlippery
|
1daniar
| 2023-07-21T11:08:28Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:08:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
model = load_from_hub(repo_id="1daniar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
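The `load_from_hub` helper above is not defined in this card; one possible implementation, assuming the Q-table is stored as a pickled dictionary in the repository, is sketched below.
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="1daniar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```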
|
firqaaa/indo-biobert-base-uncased
|
firqaaa
| 2023-07-21T11:06:30Z | 189 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
widget:
- text: "Pneumonia adalah penyakit yang disebabkan oleh [MASK]"
---
|
mrvincenzo/ppo-Huggy
|
mrvincenzo
| 2023-07-21T11:06:06Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:05:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mrvincenzo/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mpjuhasz/xlm-roberta-base-finetuned-panx-all
|
mpjuhasz
| 2023-07-21T11:02:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:48:28Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- F1: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2988 | 1.0 | 835 | 0.1818 | 0.8221 |
| 0.1575 | 2.0 | 1670 | 0.1727 | 0.8357 |
| 0.1019 | 3.0 | 2505 | 0.1768 | 0.8529 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
steja/starcoder
|
steja
| 2023-07-21T10:53:17Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-21T10:53:17Z |
---
license: bigscience-openrail-m
---
|
TheBloke/30B-Epsilon-GGML
|
TheBloke
| 2023-07-21T10:53:05Z | 17 | 9 |
transformers
|
[
"transformers",
"llama",
"alpaca",
"vicuna",
"uncensored",
"cot",
"chain of thought",
"story",
"adventure",
"roleplay",
"rp",
"merge",
"mix",
"instruct",
"wizardlm",
"superhot",
"supercot",
"manticore",
"hippogriff",
"license:other",
"region:us"
] | null | 2023-07-21T09:54:10Z |
---
inference: false
license: other
model_type: llama
tags:
- llama
- alpaca
- vicuna
- uncensored
- cot
- chain of thought
- story
- adventure
- roleplay
- rp
- merge
- mix
- instruct
- wizardlm
- superhot
- supercot
- manticore
- hippogriff
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# CalderaAI's 30B Epsilon GGML
These files are GGML format model files for [CalderaAI's 30B Epsilon](https://huggingface.co/CalderaAI/30B-Epsilon).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/30B-Epsilon-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/30B-Epsilon-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/30B-Epsilon)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| 30b-epsilon.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB| 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| 30b-epsilon.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB| 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 30b-epsilon.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB| 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 30b-epsilon.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB| 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| 30b-epsilon.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| 30b-epsilon.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| 30b-epsilon.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB| 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| 30b-epsilon.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB| 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| 30b-epsilon.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| 30b-epsilon.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| 30b-epsilon.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB| 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| 30b-epsilon.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB| 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| 30b-epsilon.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| 30b-epsilon.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m 30b-epsilon.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: CalderaAI's 30B Epsilon
## 30B-Epsilon
Epsilon is an instruct based general purpose model assembled from hand picked models and LoRAs.
There is no censorship and it follows instructions in the Alpaca format. This means you can create
your own rules in the context memory of your inference system of choice [mainly KoboldAI or Text
Generation Webui and chat UIs like SillyTavern and so on].
## Composition:
This model is the result of an experimental use of LoRAs on language models and model merges.
[] = applied as LoRA to a composite model | () = combined as composite models
30B-Epsilon = [SuperCOT[SuperHOT-prototype13b-8192[(wizardlmuncensored+((hippogriff+manticore)+(StoryV2))]
Alpaca's instruct format can be used to do many things, including control of the terms of behavior
between a user and a response from an agent in chat. Below is an example of a command injected into
memory.
```
### Instruction:
Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and
whatever quest or other information to keep consistent in the interaction).
### Response:
{an empty new line here}
```
All datasets from all models and LoRAs used were documented and reviewed as model candidates for merging.
Model candidates were based on five core principles: creativity, logic, inference, instruction following,
and longevity of trained responses. SuperHOT-prototype30b-8192 was used in this mix, not the 8K version;
the prototype LoRA seems to have been removed [from HF] as of this writing. The GPT4Alpaca LoRA from
Chansung was removed from this amalgam following a thorough review of where censorship and railroading
the user came from in 33B-Lazarus. This is not a reflection of ChanSung's excellent work - it merely did
not fit the purpose of this model.
## Language Models and LoRAs Used Credits:
manticore-30b-chat-pyg-alpha [Epoch0.4] by openaccess-ai-collective
https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha
hippogriff-30b-chat by openaccess-ai-collective
https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat
WizardLM-33B-V1.0-Uncensored by ehartford
https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored
Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUnTouch
https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs
SuperCOT-LoRA [30B] by kaiokendev
https://huggingface.co/kaiokendev/SuperCOT-LoRA
SuperHOT-LoRA-prototype30b-8192 [30b, not 8K version, but a removed prototype] by kaiokendev
https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test [Similar LoRA to one since removed that was used in making this model.]
Also thanks to Meta for LLaMA and to each and every one of you
who developed these fine-tunes and LoRAs.
|
fireballoon/baichuan-vicuna-chinese-7b
|
fireballoon
| 2023-07-21T10:40:38Z | 1,552 | 62 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:mhhmm/leetcode-solutions-python",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-18T20:43:41Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- QingyiSi/Alpaca-CoT
- mhhmm/leetcode-solutions-python
---
# baichuan-vicuna-chinese-7b
baichuan-vicuna-chinese-7b是在**中英双语**sharegpt数据上全参数微调的对话模型。
- 基座模型:[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B),在1.2T tokens上预训练的中英双语模型
- 微调数据:[ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), [ShareGPT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection), [COT & COT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chain-of-Thought), [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions), [dummy](https://github.com/lm-sys/FastChat)
- 训练代码:基于[FastChat](https://github.com/lm-sys/FastChat)
baichuan-vicuna-chinese-7b is a chat model supervised finetuned on vicuna sharegpt data in both **English** and **Chinese**.
- Foundation model: [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a commercially available language model pre-trained on a 1.2T Chinese-English bilingual corpus.
- Finetuning data: [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), [ShareGPT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection), [COT & COT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chain-of-Thought), [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions), [dummy](https://github.com/lm-sys/FastChat)
- Training code: based on [FastChat](https://github.com/lm-sys/FastChat)
**[NEW]** 4bit-128g GPTQ quantized version: [baichuan-vicuna-chinese-7b-gptq](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b-gptq)
# Training config
```
{batch_size: 256, epoch: 3, learning_rate: 2e-5, context_length: 4096, deepspeed_zero: 3, mixed_precision: bf16, gradient_clipping: 1.0}
```
# Inference
Inference with [FastChat](https://github.com/lm-sys/FastChat):
```
python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-chinese-7b
```
Inference with Transformers:
```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b", use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b").half().cuda()
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("How can I improve my time management skills?") # user message
>>> generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
Improving time management skills can help you to be more productive and efficient with your time. Here are some tips to help you improve your time management skills:
1. Prioritize tasks: Make a list of all the tasks you need to complete and prioritize them based on their importance and urgency. This will help you to focus on the most important tasks first and avoid getting overwhelmed.
2. Use a calendar or planner: Use a calendar or planner to schedule your tasks and appointments. This will help you to stay organized and ensure that you don't miss any important events or deadlines.
3. Limit distractions: Identify and eliminate distractions, such as social media notifications or email notifications, that can keep you from focusing on your tasks. Consider using tools like browser extensions or apps to block distracting websites.
4. Take breaks: Take regular breaks throughout the day to recharge and refocus. This will help you to stay productive and avoid burnout.
5. Learn to say no: Don't be afraid to say no to requests or invitations that don't align with your goals or values. This will help you to focus on your priorities and avoid overcommitting yourself.
6. Learn to delegate: If possible, delegate tasks to others to free up your own time and focus on more important tasks.
7. Learn to be flexible: Be prepared to adjust your plans and priorities as needed. Things don't always go according to plan, and it's okay to adjust your schedule to accommodate unexpected events or challenges.
Remember that improving your time management skills takes practice and patience. Be kind to yourself and don't be too hard on yourself if you don't get everything done perfectly right away.
'''
```
Gradio multi-turn dialogue example: https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b/blob/main/app.py
```
python app.py
```
# Test cases on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)
```
Q: 小明有200元,买书花了66元,买文具花了12元,小明剩下的钱比原来少了多少元?
A: 小明剩下的钱比原来少了66+12=78元。
Q: 请翻译这段英文:Timmy was fined a large sum of money by the judge for drunken driving·
A: Timmy 因醉驾被处以高额罚金
Q: 贝多芬和柴可夫斯基在风格上有什么差异吗
A: 贝多芬和柴可夫斯基都是古典音乐历史上最重要的作曲家之一,他们的风格在某些方面存在差异。贝多芬的作品通常以严肃和内在的特质为特点,他的音乐常常呈现出深沉而丰富的情感。他的旋律通常是简单而优美的,强调和声变化和音乐构成。柴可夫斯基的作品则常常呈现出更为华丽和梦幻的特质,他的音乐常常具有戏剧性和歌唱性。柴可夫斯基的旋律常常更加复杂,强调和声和旋律的对比,音乐常常充满色彩和装饰。总的来说,贝多芬和柴可夫斯基的音乐风格在内省性和歌唱性方面存在着明显的差异,但两位作曲家在古典音乐史上都具有重要地位,他们的作品仍然被广泛演奏和赞扬。
```
```
Q: 写一篇高中青涩的爱情故事,男主是“我”,女主叫小美。我在高一喜欢上了小美。 开始写吧
A: 我叫陈默,是一个高一新生,在开学的第一天,我就对一个女孩一见钟情,她叫小美,是我班上的同学。那是一个阳光明媚的上午,我正在教室外走廊上,看着同学们有说有笑地走进教室,突然一个女孩吸引了我的目光。她穿着一条白色的连衣裙,在阳光下闪闪发光,她好像在看着我,我也对她微笑了一下,就这样,我们互相认识了。
接下来的日子里,我总是忍不住偷偷地看她,上课的时候,我总是不由自主地想到她,我甚至开始期待着每一次能和她见面。我开始给她写情书,每天放学后都会给她写一封信,表达我的思念和爱意。
但是,小美似乎并不怎么在意我,她总是跟她的好朋友有说有笑,好像对我置之不理。我感到很失落,但还是忍不住想要接近她。我会在课间的时候偷偷地看着她,在食堂吃饭的时候假装不经意地碰到她,在路上遇到她时跟她打招呼。
就这样,我坚持着每天给她写信,表达我的思念和爱意,直到高二的时候,小美终于给了我回应。我们开始一起上下学,一起吃午饭,一起做很多事情。我终于得到了她的爱,我感觉自己是世界上最幸福的人。
然而,我们的幸福并没有持续多久。高三的时候,小美开始变得忙碌起来,她要参加很多补习班和考试,我担心她会累垮,于是开始跟她吵架。最终,我们还是分手了,我很难过,感觉自己失去了最爱的人。
现在,我已经毕业多年,每当我看到小美,我还是会想起我们在一起的时光,我知道,那是我一生中最美好的回忆。
```
```
Q: 从零学习编程,请给我一个三个月的学习计划
A: 以下是一个三个月的学习计划,供您参考:
1. 第1-2周:Python基础
* 学习Python基础知识,如变量和数据类型、循环、条件语句、函数和方法等。
* 练习编程基础练习,如从键盘读取和写入数据、控制流语句和函数调用等。
1. 第3-4周:Python面向对象编程
* 学习Python中面向对象编程的基础知识,如类、方法、属性和方法、继承和多态等。
* 练习面向对象编程练习,如类创建、类调用和类组合等。
1. 第5-6周:Python高级主题
* 学习Python的高级主题,如异常、内存管理、调试技术和性能优化等。
* 练习高级主题练习,如异常处理、内存管理和性能优化等。
1. 第7-8周:Python Web开发
* 学习Python Web开发的基础知识,如Django框架、Flask框架和HTML、CSS和JavaScript等。
* 练习Web开发练习,如Django或Flask应用程序创建、HTTP请求和响应和用户界面设计等。
1. 第9-10周:项目实践
* 选择一个感兴趣的项目进行实践,如网络爬虫、数据分析、机器学习或Web开发等。
* 完成项目并进行演示和讲解。
1. 第11-12周:复习和强化
* 复习所学知识,并进行强化练习以加深理解。
* 参加考试以评估学习成果。
注意:学习计划仅供参考,您可以根据自己的学习节奏和兴趣进行调整。
```
```
Q: 实现一个Python函数以使用动态编程查找两个输入字符串的最长公共子序列
A: 这里有一个Python函数,使用动态编程查找两个输入字符串的最长公共子序列(LCS):
def lcs(s1, s2):
n = len(s1)
m = len(s2)
dp = [s1[0]]
max_length = 0
for i in range(1, n + 1):
for j in range(1, m + 1):
if s1[i-1] == s2[j-1]:
dp.append(s1[i-1])
max_length = max(max_length, j-i+1)
return max(max_length, len(s1))
该函数使用动态规划来计算LCS。它从第一个字符串中的第一个字符开始,并将其添加到存储子序列的`dp`列表中。然后,它循环遍历两个字符串中的字符,如果它们相匹配,则将相应字符添加到`dp`列表中,并将`max_length`存储为当前字符串的最大长度。
最后,该函数返回`max_length`和两个输入字符串的长度之间的最大值,这表示LCS的长度。请注意,该函数假设输入字符串是字符串,而不是字符数组。如果要将其修改为处理数组,则需要相应地修改该函数。
```
|
YojitShinde/a2c-PandaReachDense-v2
|
YojitShinde
| 2023-07-21T10:38:51Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T10:36:05Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.58 +/- 0.22
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
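Until the card is completed, a hypothetical loading sketch is given below; the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption, not taken from this repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="YojitShinde/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

# Evaluate in the environment (requires panda-gym, which registers PandaReachDense-v2).
import gym
import panda_gym  # noqa: F401

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```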
|
mpjuhasz/xlm-roberta-base-finetuned-panx-fr
|
mpjuhasz
| 2023-07-21T10:38:21Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:33:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8463611859838274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- F1: 0.8464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5766 | 1.0 | 191 | 0.3445 | 0.7611 |
| 0.2638 | 2.0 | 382 | 0.2696 | 0.8355 |
| 0.1752 | 3.0 | 573 | 0.2792 | 0.8464 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Softechlb/Llama_2_13b_NEE
|
Softechlb
| 2023-07-21T10:38:10Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-21T07:10:19Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
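As an illustration only (the linked `chat_completion` reference code remains authoritative), a single-turn prompt for the chat variants is laid out roughly as follows:
```python
# Illustrative sketch of the single-turn chat prompt layout; the BOS/EOS tokens
# are added by the tokenizer/generation code rather than written by hand here.
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt.strip()}\n"
    "<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
```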
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
|
Claaas/a2c-AntBulletEnv-v0
|
Claaas
| 2023-07-21T10:37:27Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T10:36:22Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1138.21 +/- 206.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was saved to the Hub):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed)
checkpoint = load_from_hub(repo_id="Claaas/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
mpjuhasz/xlm-roberta-base-finetuned-panx-de-fr
|
mpjuhasz
| 2023-07-21T10:29:19Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:17:19Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- F1: 0.8588
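A minimal inference sketch with the `transformers` pipeline (illustrative only; the example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mpjuhasz/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Angela Merkel a rencontré Emmanuel Macron à Berlin."))
```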
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2908 | 1.0 | 715 | 0.1909 | 0.8125 |
| 0.1466 | 2.0 | 1430 | 0.1613 | 0.8492 |
| 0.0945 | 3.0 | 2145 | 0.1658 | 0.8588 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Roy029/mt5_empty_desc_2_5k_tokenizer
|
Roy029
| 2023-07-21T10:28:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T03:59:01Z |
7月21日 トークナイザのチェックのため作り直し
|
Vasanth/llama2-7b-finetuned-chatbot
|
Vasanth
| 2023-07-21T10:21:57Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-21T02:05:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-finetuned-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-finetuned-chatbot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
oleksandrfluxon/mpt-30b-chat-test
|
oleksandrfluxon
| 2023-07-21T09:31:12Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"custom_code",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T10:56:58Z |
---
license: cc-by-nc-sa-4.0
datasets:
- camel-ai/code
- ehartford/wizard_vicuna_70k_unfiltered
- anon8231489123/ShareGPT_Vicuna_unfiltered
- teknium1/GPTeacher/roleplay-instruct-v2-final
- teknium1/GPTeacher/codegen-isntruct
- timdettmers/openassistant-guanaco
- camel-ai/math
- project-baize/baize-chatbot/medical_chat_data
- project-baize/baize-chatbot/quora_chat_data
- project-baize/baize-chatbot/stackoverflow_chat_data
- camel-ai/biology
- camel-ai/chemistry
- camel-ai/ai_society
- jondurbin/airoboros-gpt4-1.2
- LongConversations
- camel-ai/physics
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
duplicated_from: mosaicml/mpt-30b-chat
---
# MPT-30B-Chat
MPT-30B-Chat is a chatbot-like model for dialogue generation.
It was built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_ (non-commercial use only)
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
_CC-By-NC-SA-4.0_ (non-commercial use only)
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Dialogue
**ksreenivasan**:
> I want you to come up with a tweet based on this summary of the article:
>
> We are excited to extend the MosaicML Foundation Series with MPT-30B, a new commercially licensed open-source model that is significantly more powerful, and outperforms the
> original GPT-3. In addition, we are releasing two fine-tuned models, MPT-30B-Instruct and MPT-30B-Chat, that are built on top of MPT-30B and excel at short-form instruction
> following and multi-turn conversations, respectively.
> All MPT-30B models come with special features that differentiate them from other LLMs, including an 8k token context window (which can be further extended via finetuning;
> see MPT-7B-StoryWriter), support for context-length extrapolation via ALiBi, and efficient inference + training performance via FlashAttention. It also has strong coding
> abilities thanks to its pretraining data mixture.
**MPT-30B-Chat**:
> "Revolutionize your AI capabilities with MosaicML's latest addition, MPT-30B! Outperforming GPT-3, this open-source model is significantly more powerful and comes with
> finely-tuned models for exceptional instruction following and multi-turn conversations. Unleash the potential of an 8k token context window, ALiBi support, and FlashAttention.
> Experience the future of AI with MosaicML! #AI #ML #innovation"
**ksreenivasan**:
> Is there anything you want to say to MosaicML about this exciting news?
**MPT-30B-Chat**:
> As your AI marketing assistant, I would say that this is truly exciting news for the AI community! MosaicML's latest addition, MPT-30B, is poised to revolutionize the industry with its powerful open-source capabilities and finely-tuned models for instruction following and multi-turn conversations.
> The special features, including the 8k token context window, ALiBi support, and FlashAttention, are sure to set MosaicML's MPT-30B apart from other LLMs.
> Congratulations to the MosaicML team on this incredible achievement!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-chat',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-chat'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| Airoboros/GPT4-1.2 | 26.4M | 1.71% |
| Baize | 55.0M | 3.57% |
| Camel | 301M | 19.54% |
| GPTeacher | 7.56M | 0.49% |
| Guanaco | 15.6M | 1.02% |
| LongConversations | 18.4M | 1.19% |
| ShareGPT | 821M | 53.24% |
| WizardLM | 297M | 19.23% |
"LongConversations" is a GPT3.5/4-generated dataset, details of which will be released at a later date.
### Training Configuration
This model was trained on 64 H100s for about 7.6 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Chat was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
Techdread/llama2-qlora-finetunined-french
|
Techdread
| 2023-07-21T09:08:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T09:08:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch of the same config follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
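The same configuration can be reconstructed in code as follows (an illustrative sketch, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above (illustrative)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```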
### Framework versions
- PEFT 0.5.0.dev0
|
nanaminanamio/K-ON-RVC-V2
|
nanaminanamio
| 2023-07-21T08:59:47Z | 0 | 0 | null |
[
"audio-to-audio",
"license:cc-by-nc-3.0",
"region:us"
] |
audio-to-audio
| 2023-07-21T08:51:25Z |
---
license: cc-by-nc-3.0
pipeline_tag: audio-to-audio
---
|
NasimB/all-base-norm-rarity-log-rarity-cut-short-728k
|
NasimB
| 2023-07-21T08:49:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T05:52:23Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: all-base-norm-rarity-log-rarity-cut-short-728k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-base-norm-rarity-log-rarity-cut-short-728k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8327
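A minimal generation sketch with the `transformers` pipeline (illustrative; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/all-base-norm-rarity-log-rarity-cut-short-728k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```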
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4769 | 0.32 | 500 | 5.4775 |
| 5.1999 | 0.63 | 1000 | 5.1294 |
| 4.8599 | 0.95 | 1500 | 4.9618 |
| 4.6066 | 1.26 | 2000 | 4.8783 |
| 4.4846 | 1.58 | 2500 | 4.8010 |
| 4.3932 | 1.89 | 3000 | 4.7527 |
| 4.2173 | 2.21 | 3500 | 4.7543 |
| 4.1413 | 2.52 | 4000 | 4.6999 |
| 4.1024 | 2.84 | 4500 | 4.6880 |
| 3.9574 | 3.15 | 5000 | 4.7229 |
| 3.8541 | 3.47 | 5500 | 4.7094 |
| 3.8339 | 3.79 | 6000 | 4.6863 |
| 3.7358 | 4.1 | 6500 | 4.7259 |
| 3.5818 | 4.42 | 7000 | 4.7368 |
| 3.5684 | 4.73 | 7500 | 4.7424 |
| 3.5321 | 5.05 | 8000 | 4.7659 |
| 3.3884 | 5.36 | 8500 | 4.7739 |
| 3.3866 | 5.68 | 9000 | 4.7734 |
| 3.3847 | 5.99 | 9500 | 4.7756 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Aharneish/ppo-Huggy
|
Aharneish
| 2023-07-21T08:32:05Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T07:28:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Aharneish/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
EllaHong/test3
|
EllaHong
| 2023-07-21T08:31:24Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T07:28:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
inmdd/vit-base-beans
|
inmdd
| 2023-07-21T08:28:03Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-21T08:23:38Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0857
- Accuracy: 0.9850
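A minimal inference sketch with the `transformers` pipeline (the image path is a placeholder for a local bean-leaf photo):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="inmdd/vit-base-beans")
print(classifier("path/to/bean_leaf.jpg"))  # placeholder path
```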
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.308 | 1.0 | 130 | 0.2118 | 0.9774 |
| 0.2219 | 2.0 | 260 | 0.1303 | 0.9699 |
| 0.1831 | 3.0 | 390 | 0.1142 | 0.9774 |
| 0.0838 | 4.0 | 520 | 0.1031 | 0.9774 |
| 0.1266 | 5.0 | 650 | 0.0857 | 0.9850 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MilosKosRad/BioNER
|
MilosKosRad
| 2023-07-21T08:27:58Z | 1,092 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"chemistry",
"biology",
"zero-shot",
"BERT",
"PubMedBERT",
"en",
"dataset:ncbi_disease",
"dataset:bigbio/chemdner",
"dataset:bigbio/n2c2_2018_track2",
"dataset:bigbio/bc5cdr",
"dataset:bigbio/jnlpba",
"arxiv:2305.04928",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-19T11:24:03Z |
---
license: mit
datasets:
- ncbi_disease
- bigbio/chemdner
- bigbio/n2c2_2018_track2
- bigbio/bc5cdr
- bigbio/jnlpba
widget:
- text: Disease<SEP>Patient was diagnosed with liver cancer.
language:
- en
tags:
- chemistry
- biology
- zero-shot
- BERT
- PubMedBERT
metrics:
- accuracy
- recall
- f1
- precision
library_name: transformers
---
# Zero and few shot NER for biomedical texts
## Model description
This model was created during the research collaboration between Bayer Pharma and The Institute for Artificial Intelligence Research and Development of Serbia.
The model is trained on 26 biomedical Named Entity (NE) classes and can perform zero-shot inference. It can also be further fine-tuned for new classes with just a few examples (few-shot learning).
For more details about our method please see the paper named ["From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts"](https://arxiv.org/abs/2305.04928). The model corresponds to the PubMedBERT-based model, trained with 1 in the first segment (check the paper for more details).
The model takes two strings as input. String1 is the NE label that is being searched for in the second string. String2 is a short text in which one wants to search for the NE (represented by String1).
The model outputs a list of ones (corresponding to the found Named Entities) and zeros (corresponding to the other, non-NE tokens) of String2.
## Example of usage
```python
from transformers import AutoTokenizer
from transformers import BertForTokenClassification
modelname = 'MilosKosRad/BioNER' # model repo id on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(modelname) ## loading the tokenizer of the model
string1 = 'Drug'
string2 = 'No recent antibiotics or other nephrotoxins, and no symptoms of UTI with benign UA.'
encodings = tokenizer(string1, string2, is_split_into_words=False,
padding=True, truncation=True, add_special_tokens=True, return_offsets_mapping=False,
max_length=512, return_tensors='pt')
model0 = BertForTokenClassification.from_pretrained(modelname, num_labels=2)
prediction_logits = model0(**encodings)
print(prediction_logits)
```
## Example of fine-tuning with few-shot learning
In order to fine-tune the model on a new entity using a few shots, the dataset needs to be transformed into a torch.utils.data.Dataset containing BERT tokens and a set of 0s and 1s (1 marks tokens that are positive and should be predicted as members of the given NE class). After the dataset is created, the following can be done (for more details, please have a look at the code on GitHub - https://github.com/br-ai-ns-institute/Zero-ShotNER):
```python
import os
import time
from transformers import BertForTokenClassification, Trainer, TrainingArguments

for n_shots, train_dataset_i in [(1, train1shot), (10, train10shot), (100, train100shot)]:
    training_args = TrainingArguments(
        output_dir='./Results'+class_unseen+'FewShot'+str(n_shots),  # output folder (folder to store the results)
        num_train_epochs=10,                                         # number of training epochs
        per_device_train_batch_size=16,                              # batch size per device during training
        per_device_eval_batch_size=16,                               # batch size for evaluation
        weight_decay=0.01,                                           # strength of weight decay
        logging_dir='./Logs'+class_unseen+'FewShot'+str(n_shots),    # folder to store the logs
        save_strategy='epoch',
        evaluation_strategy='epoch',
        load_best_model_at_end=True
    )

    model0 = BertForTokenClassification.from_pretrained(model_path, num_labels=2)

    trainer = Trainer(
        model=model0,                   # pre-trained model for fine-tuning
        args=training_args,             # training arguments defined above
        train_dataset=train_dataset_i,  # few-shot training set for the current run
        eval_dataset=valid_dataset      # dataset class object for validation
    )

    start_time = time.time()
    trainer.train()
    total_time = time.time() - start_time

    save_path = os.path.join('Results', class_unseen, 'FewShot', str(n_shots), 'Model')
    os.makedirs(save_path, exist_ok=True)
    model0.save_pretrained(save_path)

    tokenizer_path = os.path.join('Results', class_unseen, 'FewShot', str(n_shots), 'Tokenizer')
    os.makedirs(tokenizer_path, exist_ok=True)
    tokenizer.save_pretrained(tokenizer_path)
```
## Available classes
The following datasets and entities were used for training, and therefore they can be used as the label in the first segment (as the first string). Note that multi-word strings have been merged.
* NCBI
* Specific Disease
* Composite Mention
* Modifier
* Disease Class
* BIORED
* Sequence Variant
* Gene Or Gene Product
* Disease Or Phenotypic Feature
* Chemical Entity
* Cell Line
* Organism Taxon
* CDR
* Disease
* Chemical
* CHEMDNER
* Chemical
* Chemical Family
* JNLPBA
* Protein
* DNA
* Cell Type
* Cell Line
* RNA
* n2c2
* Drug
* Frequency
* Strength
* Dosage
* Form
* Reason
* Route
* ADE
* Duration
On top of this, one can use the model for zero-shot learning with other classes, and also fine-tune it with a few examples of other classes.
## Code availability
Code used for training and testing the model is available at https://github.com/br-ai-ns-institute/Zero-ShotNER
## Citation
If you use this model, or are inspired by it, please cite the following paper in your work:
Košprdić M.,Prodanović N., Ljajić A., Bašaragin B., Milošević N., 2023. From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts. arXiv preprint arXiv:2305.04928. https://arxiv.org/abs/2305.04928
or in bibtex:
```
@misc{kosprdic2023transformerbased,
title={From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts},
author={Miloš Košprdić and Nikola Prodanović and Adela Ljajić and Bojana Bašaragin and Nikola Milošević},
year={2023},
eprint={2305.04928},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jeremyleejh/ppo-LunarLander-v2
|
jeremyleejh
| 2023-07-21T08:21:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-17T14:08:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 112.42 +/- 87.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption about how the model was saved to the Hub):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed)
checkpoint = load_from_hub(repo_id="jeremyleejh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Wariano/longformer-base-4096-biomedical-clinical-es-vih
|
Wariano
| 2023-07-21T08:08:39Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T12:29:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: longformer-base-4096-biomedical-clinical-es-vih
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-biomedical-clinical-es-vih
This model is a fine-tuned version of [PlanTL-GOB-ES/longformer-base-4096-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/longformer-base-4096-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3821
- Accuracy: 0.763
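A minimal inference sketch with the `transformers` pipeline (the input is invented placeholder Spanish clinical text, not real patient data):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Wariano/longformer-base-4096-biomedical-clinical-es-vih",
)
print(clf("Paciente varón de 45 años que acude a consulta por fiebre y astenia."))
```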
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6327 | 1.0 | 750 | 0.5347 | 0.724 |
| 0.5336 | 2.0 | 1500 | 0.5497 | 0.7557 |
| 0.4211 | 3.0 | 2250 | 0.9538 | 0.7607 |
| 0.284 | 4.0 | 3000 | 1.0767 | 0.7623 |
| 0.1676 | 5.0 | 3750 | 1.3255 | 0.7603 |
| 0.1106 | 6.0 | 4500 | 1.3821 | 0.763 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
neggles/lora-dump
|
neggles
| 2023-07-21T08:04:29Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-03-22T05:48:00Z |
---
license: cc-by-nc-sa-4.0
---
|
Pravincoder/Loan_Approval_Prediction
|
Pravincoder
| 2023-07-21T07:57:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-21T07:55:41Z |
---
license: creativeml-openrail-m
---
|
kyzer0/atha3
|
kyzer0
| 2023-07-21T07:49:34Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-21T07:48:33Z |
---
license: bigcode-openrail-m
---
|
NasimB/cbt-raqrity-log-rarity-no-cut
|
NasimB
| 2023-07-21T07:43:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T04:35:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-raqrity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-raqrity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3408 | 0.29 | 500 | 5.3424 |
| 5.03 | 0.58 | 1000 | 4.9271 |
| 4.7051 | 0.87 | 1500 | 4.6877 |
| 4.4398 | 1.17 | 2000 | 4.5455 |
| 4.3008 | 1.46 | 2500 | 4.4279 |
| 4.1949 | 1.75 | 3000 | 4.3274 |
| 4.0682 | 2.04 | 3500 | 4.2525 |
| 3.8858 | 2.33 | 4000 | 4.2063 |
| 3.8689 | 2.62 | 4500 | 4.1532 |
| 3.8239 | 2.91 | 5000 | 4.1073 |
| 3.634 | 3.21 | 5500 | 4.0988 |
| 3.5816 | 3.5 | 6000 | 4.0685 |
| 3.5714 | 3.79 | 6500 | 4.0351 |
| 3.4816 | 4.08 | 7000 | 4.0318 |
| 3.3156 | 4.37 | 7500 | 4.0283 |
| 3.3081 | 4.66 | 8000 | 4.0139 |
| 3.3003 | 4.95 | 8500 | 4.0043 |
| 3.1521 | 5.24 | 9000 | 4.0154 |
| 3.1348 | 5.54 | 9500 | 4.0132 |
| 3.129 | 5.83 | 10000 | 4.0129 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dchaudhari/my_awesome_qa_model_new
|
dchaudhari
| 2023-07-21T07:43:01Z | 100 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-21T06:47:22Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: dchaudhari/my_awesome_qa_model_new
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dchaudhari/my_awesome_qa_model_new
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8894
- Validation Loss: 0.9731
- Epoch: 2
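A minimal inference sketch with the `transformers` pipeline (the question/context pair is a placeholder; `framework="tf"` is passed on the assumption that only TensorFlow weights were pushed):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dchaudhari/my_awesome_qa_model_new",
    framework="tf",  # assumption: the repository contains TensorFlow weights
)
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
))
```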
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1298, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4478 | 1.0832 | 0 |
| 0.9814 | 0.9731 | 1 |
| 0.8894 | 0.9731 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bobobert4/ppo-PyramidsRND
|
bobobert4
| 2023-07-21T07:39:10Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-21T07:04:37Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: bobobert4/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Amod/falcon7b-mental-health-counseling
|
Amod
| 2023-07-21T07:10:35Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T06:54:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
Apocalypse-19/speecht5_finetuned_french
|
Apocalypse-19
| 2023-07-21T07:06:40Z | 158 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"fr",
"dataset:facebook/voxpopuli",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-20T14:14:05Z |
---
language:
- fr
base_model: microsoft/speecht5_tts
tags:
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-fr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4532
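A minimal synthesis sketch (illustrative; it assumes the fine-tuned repository contains the processor files, requires the `soundfile` package, and uses a random speaker embedding purely as a placeholder, so a real 512-dimensional x-vector will sound much better):
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "Apocalypse-19/speecht5_finetuned_french"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bonjour, comment allez-vous ?", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech_fr.wav", speech.numpy(), samplerate=16000)
```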
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5147 | 2.42 | 1000 | 0.4753 |
| 0.4932 | 4.84 | 2000 | 0.4629 |
| 0.4926 | 7.26 | 3000 | 0.4566 |
| 0.4907 | 9.69 | 4000 | 0.4542 |
| 0.4839 | 12.11 | 5000 | 0.4532 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bobobert4/ppo-SnowballTarget
|
bobobert4
| 2023-07-21T07:05:08Z | 41 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-21T04:58:24Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: bobobert4/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
yancongwen/chatglm2-6b-pt-16-1e-2-20230721-3
|
yancongwen
| 2023-07-21T07:02:26Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2023-07-21T06:58:01Z |
# ChatGLM2-6B fine-tuned model
Reference: [ChatGLM2-6B-PT](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning)
## Parameters
```sh
PRE_SEQ_LEN=16
LR=1e-2
NUM_GPUS=1
torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_train \
--train_file train_data/train_100k.json \
--validation_file train_data/dev_1k.json \
--preprocessing_num_workers 10 \
--prompt_column question \
--response_column answer \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/chatglm2-6b-pt-$PRE_SEQ_LEN-$LR-20230721-3 \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 128 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 16 \
--predict_with_generate \
--max_steps 1000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
--pre_seq_len $PRE_SEQ_LEN \
--quantization_bit 4
```
## train metrics
```
{
"epoch": 0.41,
"train_loss": 0.17034315645694734,
"train_runtime": 12318.9224,
"train_samples": 78577,
"train_samples_per_second": 2.598,
"train_steps_per_second": 0.081
}
```
---
license: unlicense
---
|
rdpatilds/llma2-7b-tuned-alpaca
|
rdpatilds
| 2023-07-21T06:56:23Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-07-21T04:03:45Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llma2-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llma2-7b-tuned-alpaca
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
0x05a4/DeepRL-QLearning-Tv3
|
0x05a4
| 2023-07-21T06:43:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T06:43:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRL-QLearning-Tv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="0x05a4/DeepRL-QLearning-Tv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
0x05a4/DeepRL-QLearning-FLv1
|
0x05a4
| 2023-07-21T06:42:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T06:42:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRL-QLearning-FLv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="0x05a4/DeepRL-QLearning-FLv1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat
|
SmilePanda
| 2023-07-21T06:14:36Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"zh",
"dataset:YeungNLP/firefly-train-1.1M",
"dataset:BelleGroup/train_2M_CN",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T09:06:10Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- YeungNLP/firefly-train-1.1M
- BelleGroup/train_2M_CN
language:
- zh
---
# Langboat_bloom-6b4-zh-instruct_finetune-chat
A dialogue model fine-tuned from the Langboat_bloom-6b4-zh model with the QLoRA method on the firefly-train-1.1M and Belle-train_2m_cn datasets.
Evaluation results on C-Eval:
| STEM | Social Sciences | Humanities | Others | Average | AVG(Hard) |
|------|-----------------|------------|--------|---------|-----------|
| 27.9 | 27.2 | 24.8 | 26.4 | 26.8 | 28.0 |
# Usage
## Single-turn instruction generation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", use_fast=False)
source_prefix = "human"
target_prefix = "assistant"
query = "你好"
sentence = f"{source_prefix}: \n{query}\n\n{target_prefix}: \n"
print("query: ", sentence)
input_ids = tokenizer(sentence, return_tensors='pt').input_ids.to(device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=500,
do_sample=True,
top_p=0.8,
temperature=0.35,
repetition_penalty=1.2,
eos_token_id=tokenizer.eos_token_id)
rets = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
response = rets.replace(sentence, "")
print(response)
```
## Multi-turn dialogue
```python
import os
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", use_fast=False)
source_prefix = "human"
target_prefix = "assistant"
history = ""
while True:
query = input("user: ").strip()
if not query:
continue
if query == 'q' or query == 'stop':
break
if history:
sentence = history + f"\n{source_prefix}: \n{query}\n\n{target_prefix}: \n"
else:
sentence = f"{source_prefix}: \n{query}\n\n{target_prefix}: \n"
input_ids = tokenizer(sentence, return_tensors='pt').input_ids.to(device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=1024,
do_sample=True,
top_p=0.90,
temperature=0.1,
repetition_penalty=1.0,
eos_token_id=tokenizer.eos_token_id)
rets = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print("bloom: {}".format(rets.replace(sentence, "")))
history = rets
```
|
Mustafaa4a/ASR-Somali
|
Mustafaa4a
| 2023-07-21T06:08:44Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-20T20:12:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ASR-Somali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR-Somali
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3660
- Wer: 0.3060
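A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Mustafaa4a/ASR-Somali")
print(asr("path/to/somali_sample.wav"))  # placeholder path to a 16 kHz mono recording
```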
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1435 | 2.09 | 400 | 0.7624 | 0.7706 |
| 0.5829 | 4.18 | 800 | 0.3646 | 0.3935 |
| 0.3634 | 6.27 | 1200 | 0.3318 | 0.3944 |
| 0.2942 | 8.36 | 1600 | 0.3148 | 0.3403 |
| 0.2419 | 10.44 | 2000 | 0.3000 | 0.3255 |
| 0.2104 | 12.53 | 2400 | 0.2951 | 0.3312 |
| 0.1864 | 14.62 | 2800 | 0.3296 | 0.3083 |
| 0.1666 | 16.71 | 3200 | 0.3264 | 0.3153 |
| 0.148 | 18.8 | 3600 | 0.3188 | 0.3028 |
| 0.1305 | 20.89 | 4000 | 0.3448 | 0.3002 |
| 0.1206 | 22.98 | 4400 | 0.3660 | 0.3060 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 1.18.3
- Tokenizers 0.13.3
|
Shojint/my_awesome_food_model
|
Shojint
| 2023-07-21T05:44:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-21T02:59:13Z |
---
tags:
- generated_from_trainer
datasets:
- food101
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model was trained from scratch on the food101 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2752
- eval_accuracy: 0.923
- eval_runtime: 18.6571
- eval_samples_per_second: 53.599
- eval_steps_per_second: 3.377
- epoch: 29.71
- step: 1857
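For reference, a minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder, and the labels come from the food101 label mapping stored in the checkpoint config.
```python
from transformers import pipeline

# Hypothetical usage sketch for this ViT checkpoint.
classifier = pipeline("image-classification", model="Shojint/my_awesome_food_model")

# "dish.jpg" is a placeholder; any local path or URL to an image works.
predictions = classifier("dish.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```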
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Claaas/q-Taxi-v3
|
Claaas
| 2023-07-21T05:40:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T05:40:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="Claaas/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
digiplay/SyncMix_v1.5
|
digiplay
| 2023-07-21T05:39:41Z | 345 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-21T04:49:08Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/94834?modelVersionId=122277
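A minimal `diffusers` sketch for generating an image with this checkpoint; the prompt, fp16 dtype, and CUDA device are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical usage sketch; adjust dtype and device for your hardware.
pipe = StableDiffusionPipeline.from_pretrained("digiplay/SyncMix_v1.5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a sunlit forest path, highly detailed").images[0]
image.save("syncmix_sample.png")
```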
|
Lokeshsoni2801/doc_classification_model_v1
|
Lokeshsoni2801
| 2023-07-21T05:32:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T18:35:20Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Lokeshsoni2801/doc_classification_model_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lokeshsoni2801/doc_classification_model_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5673
- Validation Loss: 0.6571
- Train Accuracy: 0.7662
- Epoch: 5
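Since the repository ships TensorFlow weights, inference could look roughly like the sketch below; the input sentence is a placeholder, and the printed label depends on the `id2label` mapping stored in the config.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Lokeshsoni2801/doc_classification_model_v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

# Placeholder document text; replace with a real input.
inputs = tokenizer("Invoice number 12345, due on receipt.", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
pred_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label.get(pred_id, pred_id))
```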
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 145, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.4044 | 1.1742 | 0.6766 | 0 |
| 1.0292 | 0.8728 | 0.7015 | 1 |
| 0.7649 | 0.7547 | 0.7413 | 2 |
| 0.6383 | 0.6743 | 0.7761 | 3 |
| 0.5833 | 0.6571 | 0.7662 | 4 |
| 0.5673 | 0.6571 | 0.7662 | 5 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
charlyfive/distilbert-base-uncased-finetuned-emotion
|
charlyfive
| 2023-07-21T05:30:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T05:16:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258169429798419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2121
- Accuracy: 0.926
- F1: 0.9258
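A minimal usage sketch with the text-classification pipeline; the input text is illustrative, and the exact label names come from the emotion dataset mapping.
```python
from transformers import pipeline

# Hypothetical usage sketch for this emotion classifier.
classifier = pipeline("text-classification", model="charlyfive/distilbert-base-uncased-finetuned-emotion")

print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': ...}]
```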
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7858 | 1.0 | 250 | 0.3065 | 0.9105 | 0.9102 |
| 0.2391 | 2.0 | 500 | 0.2121 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
panda1835/Shelly
|
panda1835
| 2023-07-21T05:30:17Z | 0 | 0 | null |
[
"animal",
"sea-turtle",
"object-detection",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-07-20T12:53:18Z |
---
license: apache-2.0
language:
- en
pipeline_tag: object-detection
tags:
- animal
- sea-turtle
---
# Model Card for Model ID
## Model Details
### Model Description
The Shelly - Sea Turtle Object Detection Model is an artificial intelligence (AI) system designed to detect and localize sea turtles in images and video frames. Developed with deep learning techniques, it aims for high accuracy and efficiency, enabling real-time detection of sea turtles in their natural habitats.
- **Developed by:** Phuc Le
## Uses
The primary purpose of the Sea Turtle Object Detection Model is to aid researchers, conservationists, and marine biologists in monitoring and studying sea turtle populations. It serves as a valuable tool to identify and count sea turtles from aerial surveys, underwater cameras, or other forms of media, assisting in conservation efforts and understanding the ecological dynamics of sea turtles in various marine environments.
|
nebulae7/four
|
nebulae7
| 2023-07-21T05:30:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T04:51:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
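For reference, the quantization settings listed above correspond roughly to the `BitsAndBytesConfig` sketch below; "base-model-id" is a placeholder, since the card does not name the base model.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstructed from the values listed above; "base-model-id" is a placeholder.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
```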
### Framework versions
- PEFT 0.5.0.dev0
|
Mikael110/llama-2-70b-guanaco-qlora
|
Mikael110
| 2023-07-21T05:25:09Z | 0 | 19 | null |
[
"llama-2",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-07-21T04:38:52Z |
---
language:
- en
pipeline_tag: text-classification
tags:
- llama-2
---
This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). It was fine-tuned from the base [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt format is therefore also the same as the original Guanaco model.
This repo contains the QLoRA adapter.
A 7b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora).
A 13b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora).
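A minimal sketch for attaching this adapter to the base model with PEFT; 4-bit loading and the usual Guanaco prompt format (`### Human: ... ### Assistant:`) are assumptions, and access to the gated Llama-2 weights is required.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"  # gated; requires an accepted license
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, "Mikael110/llama-2-70b-guanaco-qlora")

# Assumed Guanaco-style prompt; adjust if your copy of the training data used a different template.
prompt = "### Human: Summarize what QLoRA does.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```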
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**
|
ddoc/dt
|
ddoc
| 2023-07-21T05:20:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T05:19:27Z |
# !After Detailer
!After Detailer is an extension for the stable diffusion webui, similar to Detection Detailer, except it uses ultralytics instead of mmdet.
## Install
(from Mikubill/sd-webui-controlnet)
1. Open "Extensions" tab.
2. Open "Install from URL" tab in the tab.
3. Enter `https://github.com/Bing-su/adetailer.git` to "URL for extension's git repository".
4. Press "Install" button.
5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
7. Completely restart the A1111 webui, including your terminal. (If you do not know what a "terminal" is, you can simply reboot your computer: turn it off and turn it on again.)
You can now install it directly from the Extensions tab.

You **DON'T** need to download any model from huggingface.
## Options
| Model, Prompts | | |
| --------------------------------- | ------------------------------------- | ------------------------------------------------- |
| ADetailer model | Determine what to detect. | `None` = disable |
| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
| Detection | | |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | --- |
| Detection model confidence threshold | Only objects with a detection model confidence above this threshold are used for inpainting. | |
| Mask min/max ratio | Only use masks whose area is between those ratios for the area of the entire image. | |
If you want to exclude objects in the background, try setting the min ratio to around `0.01`.
| Mask Preprocessing | | |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| Mask x, y offset                | Moves the detected mask horizontally and vertically by the specified offset.                                                          |                                                                                           |
| Mask erosion (-) / dilation (+) | Enlarge or reduce the detected mask. | [opencv example](https://docs.opencv.org/4.7.0/db/df6/tutorial_erosion_dilatation.html) |
| Mask merge mode | `None`: Inpaint each mask<br/>`Merge`: Merge all masks and inpaint<br/>`Merge and Invert`: Merge all masks and Invert, then inpaint | |
Applied in this order: x, y offset → erosion/dilation → merge/invert.
#### Inpainting

Each option corresponds to a corresponding option on the inpaint tab.
## ControlNet Inpainting
You can use the ControlNet extension if you have ControlNet installed and ControlNet models.
Support `inpaint, scribble, lineart, openpose, tile` controlnet models. Once you choose a model, the preprocessor is set automatically.
## Model
| Model | Target | mAP 50 | mAP 50-95 |
| --------------------- | --------------------- | ----------------------------- | ----------------------------- |
| face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 |
| face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 |
| hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 |
| person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox)<br/>0.761 (mask) | 0.555 (bbox)<br/>0.460 (mask) |
| person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox)<br/>0.809 (mask) | 0.605 (bbox)<br/>0.508 (mask) |
| mediapipe_face_full | realistic face | - | - |
| mediapipe_face_short | realistic face | - | - |
| mediapipe_face_mesh | realistic face | - | - |
The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).
### User Model
Put your [ultralytics](https://github.com/ultralytics/ultralytics) model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`.
It must be a bbox detection or segmentation model, and all of its labels are used. A standalone sanity check is sketched below.
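Before dropping a custom model into that folder, you can verify it runs standalone with ultralytics; the file names below are placeholders.
```python
from ultralytics import YOLO

# Placeholder paths; use your own detection/segmentation weights and a test image.
model = YOLO("my_detector.pt")
results = model("test_image.jpg")

# Print detected class names and confidences.
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```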
### Dataset
Datasets used for training the yolo models are:
#### Face
- [Anime Face CreateML](https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml)
- [xml2txt](https://universe.roboflow.com/0oooooo0/xml2txt-njqx1)
- [AN](https://universe.roboflow.com/sed-b8vkf/an-lfg5i)
- [wider face](http://shuoyang1213.me/WIDERFACE/index.html)
#### Hand
- [AnHDet](https://universe.roboflow.com/1-yshhi/anhdet)
- [hand-detection-fuao9](https://universe.roboflow.com/catwithawand/hand-detection-fuao9)
#### Person
- [coco2017](https://cocodataset.org/#home) (only person)
- [AniSeg](https://github.com/jerryli27/AniSeg)
- [skytnt/anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation)
## Example


[](https://ko-fi.com/F1F1L7V2N)
|
LarryAIDraw/chara_JakuChara_NanamiMinami_v1
|
LarryAIDraw
| 2023-07-21T05:11:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-21T04:39:26Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/112933/nanami-minami-or-jaku-chara-tomozaki-kun
|
dnt05/dnt05
|
dnt05
| 2023-07-21T05:10:58Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-07-21T05:10:58Z |
---
license: cc-by-nc-nd-4.0
---
|
yihhantan/llama2-qlora-finetunined-french
|
yihhantan
| 2023-07-21T05:05:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T05:05:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
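To reload the adapter, a sketch along these lines should work; the base model is read from the adapter config rather than hard-coded, since the card does not name it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter_id = "yihhantan/llama2-qlora-finetunined-french"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Load the base checkpoint recorded in the adapter config, then attach the LoRA weights.
base = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
```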
### Framework versions
- PEFT 0.5.0.dev0
|
opengl99/xlm-roberta-base-finetuned-panx-de-fr
|
opengl99
| 2023-07-21T04:52:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T04:36:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2129
- F1: 0.8433
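A minimal inference sketch with the token-classification pipeline; the example sentence is illustrative.
```python
from transformers import pipeline

# Hypothetical usage sketch; aggregation_strategy groups word-piece tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="opengl99/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel besuchte Paris im Juli."))
```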
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3549 | 1.0 | 715 | 0.2407 | 0.8008 |
| 0.1874 | 2.0 | 1430 | 0.2093 | 0.8349 |
| 0.1172 | 3.0 | 2145 | 0.2129 | 0.8433 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
xiao12346/bloomz_560m_PROMPT_TUNING_CAUSAL_LM
|
xiao12346
| 2023-07-21T04:40:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T04:37:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
pratikhublikar/my_awesome_billsum_model
|
pratikhublikar
| 2023-07-21T04:37:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-20T14:26:05Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5383
- Rouge1: 0.1438
- Rouge2: 0.0496
- Rougel: 0.1159
- Rougelsum: 0.1158
- Gen Len: 19.0
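A minimal summarization sketch; the "summarize: " prefix follows the usual T5 convention, and the input text is a placeholder.
```python
from transformers import pipeline

# Hypothetical usage sketch for this T5-small billsum fine-tune.
summarizer = pipeline("summarization", model="pratikhublikar/my_awesome_billsum_model")

text = "summarize: The bill establishes a grant program for state-level water infrastructure upgrades..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```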
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8360 | 0.1245 | 0.0346 | 0.103 | 0.103 | 19.0 |
| No log | 2.0 | 124 | 2.6210 | 0.1355 | 0.0455 | 0.1106 | 0.1103 | 19.0 |
| No log | 3.0 | 186 | 2.5557 | 0.1425 | 0.0505 | 0.1153 | 0.1152 | 19.0 |
| No log | 4.0 | 248 | 2.5383 | 0.1438 | 0.0496 | 0.1159 | 0.1158 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yancongwen/chatglm2-6b-pt-16-1e-2-20230721-1
|
yancongwen
| 2023-07-21T04:29:32Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2023-07-21T03:21:54Z |
# ChatGLM2-6B Fine-tuned Model
Reference: [ChatGLM2-6B-PT](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning)
## Parameters
```sh
PRE_SEQ_LEN=16
LR=1e-2
NUM_GPUS=1
torchrun --standalone --nnodes=1 --nproc-per-node=$NUM_GPUS main.py \
--do_train \
--train_file train_data/train_100k.json \
--validation_file train_data/dev_1k.json \
--preprocessing_num_workers 10 \
--prompt_column question \
--response_column answer \
--overwrite_cache \
--model_name_or_path THUDM/chatglm2-6b \
--output_dir output/chatglm2-6b-pt-$PRE_SEQ_LEN-$LR-20230721-1 \
--overwrite_output_dir \
--max_source_length 256 \
--max_target_length 128 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps 8 \
--predict_with_generate \
--max_steps 2000 \
--logging_steps 10 \
--save_steps 1000 \
--learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN
```
## train metrics
```
{
"epoch": 0.41,
"train_loss": 0.1131925263106823,
"train_runtime": 8987.3957,
"train_samples": 78577,
"train_samples_per_second": 3.561,
"train_steps_per_second": 0.223
}
```
---
license: unlicense
---
|