modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-04 06:26:56) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (538 classes) | tags (list, 1–4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-04 06:26:41) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
orrenius/pix_test2 | orrenius | 2023-08-03T12:16:09Z | 30 | 0 | peft | ["peft", "text-generation", "en", "region:us"] | text-generation | 2023-07-27T11:56:43Z |
---
language:
- en
library_name: peft
pipeline_tag: text-generation
---
|
KingKazma/xsum_108_5000000_2500000_validation | KingKazma | 2023-08-03T12:07:26Z | 3 | 0 | bertopic | ["bertopic", "text-classification", "region:us"] | text-classification | 2023-08-03T12:07:25Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# xsum_108_5000000_2500000_validation
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_108_5000000_2500000_validation")
topic_model.get_topic_info()
```
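Individual topics can then be inspected with the standard `get_topic` call (topic IDs are listed in the overview below):
```python
# Keywords and c-TF-IDF weights for topic 1 ("win - game - league - club - player")
topic_model.get_topic(1)
```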
## Topic overview
* Number of topics: 9
* Number of training documents: 11332
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - world - first - one - time | 41 | -1_said_world_first_one |
| 0 | said - mr - would - people - also | 813 | 0_said_mr_would_people |
| 1 | win - game - league - club - player | 7931 | 1_win_game_league_club |
| 2 | sport - olympic - race - gold - world | 2105 | 2_sport_olympic_race_gold |
| 3 | round - world - champion - open - golf | 219 | 3_round_world_champion_open |
| 4 | murray - match - tennis - set - number | 70 | 4_murray_match_tennis_set |
| 5 | race - hamilton - f1 - rosberg - mercedes | 60 | 5_race_hamilton_f1_rosberg |
| 6 | yn - ar - ei - yr - wedi | 50 | 6_yn_ar_ei_yr |
| 7 | fight - title - boxing - champion - im | 43 | 7_fight_title_boxing_champion |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
|
arhamk/CartPole-v1 | arhamk | 2023-08-03T12:06:59Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T11:12:31Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rightspeed/policy-hope | rightspeed | 2023-08-03T12:05:48Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T12:05:47Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: policy-hope
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 16.40 +/- 8.16
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AaronBrennan/UniversityOfGalway | AaronBrennan | 2023-08-03T12:02:29Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T12:02:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: UniversityOfGalway
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # gym (or gymnasium) must be installed

# `load_from_hub` is the helper defined in the Deep RL Course notebooks, not a library import
model = load_from_hub(repo_id="AaronBrennan/UniversityOfGalway", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
Zestor/Llama-2-7b-chat-hf-apex-02082023-1255 | Zestor | 2023-08-03T11:49:31Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-02T21:19:58Z |
---
tags:
- text-generation
widget:
- text: >-
Write a salesforce trigger for the Account object that flags low credit
score accounts.
---
# Model Trained on Zestor/apex-code
As required by Meta's Llama 2 terms, this model card includes a reference to the Llama 2 license:
https://huggingface.co/meta-llama/Llama-2-7b/blob/main/LICENSE.txt
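A usage sketch (standard `transformers` text-generation pipeline; the generation settings are assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Zestor/Llama-2-7b-chat-hf-apex-02082023-1255")
prompt = "Write a salesforce trigger for the Account object that flags low credit score accounts."
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```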
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_l5_v20_manual | KingKazma | 2023-08-03T11:49:22Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T11:49:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
maheboob/llama2-qlora-finetunined-french | maheboob | 2023-08-03T11:34:40Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T11:34:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
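A loading sketch for the adapter (the base model ID is an assumption, as the card does not state it; the quantization arguments mirror the config above):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization, mirroring the training config above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the base model is not stated in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, "maheboob/llama2-qlora-finetunined-french")
```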
|
KingKazma/xsum_108_50000_25000_test | KingKazma | 2023-08-03T11:30:26Z | 3 | 0 | bertopic | ["bertopic", "text-classification", "region:us"] | text-classification | 2023-08-03T11:30:25Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# xsum_108_50000_25000_test
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_108_50000_25000_test")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 84
* Number of training documents: 11334
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | said - mr - people - would - year | 5 | -1_said_mr_people_would |
| 0 | win - goal - game - league - foul | 4854 | 0_win_goal_game_league |
| 1 | police - court - said - officer - mr | 1637 | 1_police_court_said_officer |
| 2 | labour - party - eu - election - vote | 872 | 2_labour_party_eu_election |
| 3 | health - care - nhs - patient - cancer | 341 | 3_health_care_nhs_patient |
| 4 | olympic - sport - race - gold - medal | 325 | 4_olympic_sport_race_gold |
| 5 | cricket - england - wicket - test - captain | 278 | 5_cricket_england_wicket_test |
| 6 | animal - dog - bird - whale - specie | 206 | 6_animal_dog_bird_whale |
| 7 | bridge - rail - council - said - transport | 199 | 7_bridge_rail_council_said |
| 8 | school - education - student - teacher - university | 193 | 8_school_education_student_teacher |
| 9 | bank - rate - growth - economy - market | 181 | 9_bank_rate_growth_economy |
| 10 | syria - syrian - iraq - iran - force | 145 | 10_syria_syrian_iraq_iran |
| 11 | energy - industry - wind - electricity - company | 119 | 11_energy_industry_wind_electricity |
| 12 | film - actress - star - actor - character | 80 | 12_film_actress_star_actor |
| 13 | president - boko - african - haram - mr | 79 | 13_president_boko_african_haram |
| 14 | fire - blaze - service - smoke - said | 79 | 14_fire_blaze_service_smoke |
| 15 | trump - mr - republican - trumps - president | 75 | 15_trump_mr_republican_trumps |
| 16 | music - album - song - band - singer | 70 | 16_music_album_song_band |
| 17 | race - hamilton - f1 - mercedes - lap | 68 | 17_race_hamilton_f1_mercedes |
| 18 | space - earth - planet - solar - orbit | 61 | 18_space_earth_planet_solar |
| 19 | lifeboat - rnli - beach - coastguard - rescue | 55 | 19_lifeboat_rnli_beach_coastguard |
| 20 | flood - flooding - water - weather - rain | 55 | 20_flood_flooding_water_weather |
| 21 | fight - boxing - champion - joshua - ali | 54 | 21_fight_boxing_champion_joshua |
| 22 | plane - aircraft - flight - passenger - pilot | 54 | 22_plane_aircraft_flight_passenger |
| 23 | earthquake - quake - flood - people - water | 53 | 23_earthquake_quake_flood_people |
| 24 | russian - russia - ukraine - putin - ukrainian | 49 | 24_russian_russia_ukraine_putin |
| 25 | murray - match - wimbledon - tennis - konta | 47 | 25_murray_match_wimbledon_tennis |
| 26 | bitcoin - security - talktalk - data - tor | 44 | 26_bitcoin_security_talktalk_data |
| 27 | round - birdie - bogey - par - shot | 41 | 27_round_birdie_bogey_par |
| 28 | ireland - dup - sinn - northern - party | 39 | 28_ireland_dup_sinn_northern |
| 29 | maduro - venezuela - president - venezuelan - opposition | 36 | 29_maduro_venezuela_president_venezuelan |
| 30 | yn - ar - yr - ei - wedi | 36 | 30_yn_ar_yr_ei |
| 31 | painting - art - gallery - portrait - museum | 34 | 31_painting_art_gallery_portrait |
| 32 | unsupported - updated - bst - playback - media | 33 | 32_unsupported_updated_bst_playback |
| 33 | migrant - eu - asylum - turkey - germany | 31 | 33_migrant_eu_asylum_turkey |
| 34 | stone - cave - discovery - site - tree | 30 | 34_stone_cave_discovery_site |
| 35 | parade - poppy - flag - jesus - statue | 30 | 35_parade_poppy_flag_jesus |
| 36 | drug - cannabis - drugs - heroin - cocaine | 27 | 36_drug_cannabis_drugs_heroin |
| 37 | church - pope - bishop - vatican - cardinal | 27 | 37_church_pope_bishop_vatican |
| 38 | greek - greece - bailout - eurozone - bank | 27 | 38_greek_greece_bailout_eurozone |
| 39 | nama - ireland - northern - cerberus - irish | 26 | 39_nama_ireland_northern_cerberus |
| 40 | prison - prisoner - prisons - justice - turing | 25 | 40_prison_prisoner_prisons_justice |
| 41 | radio - show - bbc - series - programme | 24 | 41_radio_show_bbc_series |
| 42 | fifa - blatter - platini - fifas - football | 23 | 42_fifa_blatter_platini_fifas |
| 43 | tesco - sale - store - supermarket - customer | 23 | 43_tesco_sale_store_supermarket |
| 44 | china - taiwan - chinese - hong - taiwans | 22 | 44_china_taiwan_chinese_hong |
| 45 | afghan - taliban - afghanistan - mansour - mullah | 22 | 45_afghan_taliban_afghanistan_mansour |
| 46 | council - local - funding - government - authority | 22 | 46_council_local_funding_government |
| 47 | nsa - encryption - cia - snowden - us | 21 | 47_nsa_encryption_cia_snowden |
| 48 | ice - glacier - temperature - ocean - climate | 21 | 48_ice_glacier_temperature_ocean |
| 49 | osullivan - world - snooker - beat - champion | 21 | 49_osullivan_world_snooker_beat |
| 50 | book - prize - novel - author - award | 20 | 50_book_prize_novel_author |
| 51 | auschwitz - jews - holocaust - camp - winton | 20 | 51_auschwitz_jews_holocaust_camp |
| 52 | samsung - apple - phone - company - battery | 19 | 52_samsung_apple_phone_company |
| 53 | picture - image - pictures - please - submit | 19 | 53_picture_image_pictures_please |
| 54 | korea - north - korean - missile - koreas | 19 | 54_korea_north_korean_missile |
| 55 | pension - worker - pay - work - hour | 19 | 55_pension_worker_pay_work |
| 56 | pen - fillon - le - macron - mr | 18 | 56_pen_fillon_le_macron |
| 57 | paris - eaw - french - attack - suspect | 18 | 57_paris_eaw_french_attack |
| 58 | content - app - tv - digital - apple | 18 | 58_content_app_tv_digital |
| 59 | israel - israeli - palestinians - palestinian - gaza | 17 | 59_israel_israeli_palestinians_palestinian |
| 60 | housing - affordable - rent - homelessness - government | 17 | 60_housing_affordable_rent_homelessness |
| 61 | prince - queen - birthday - duke - royal | 17 | 61_prince_queen_birthday_duke |
| 62 | australia - australian - asylum - visa - abbott | 15 | 62_australia_australian_asylum_visa |
| 63 | tax - spending - cut - osborne - fiscal | 15 | 63_tax_spending_cut_osborne |
| 64 | updated - 2017 - bst - last - gmt | 14 | 64_updated_2017_bst_last |
| 65 | refugee - uk - child - vulnerable - refugees | 12 | 65_refugee_uk_child_vulnerable |
| 66 | ebola - sierra - leone - outbreak - liberia | 12 | 66_ebola_sierra_leone_outbreak |
| 67 | shah - ahmed - mosque - muslims - prophet | 11 | 67_shah_ahmed_mosque_muslims |
| 68 | broadband - 4g - ee - customer - internet | 11 | 68_broadband_4g_ee_customer |
| 69 | pistorius - steenkamp - toilet - door - reeva | 10 | 69_pistorius_steenkamp_toilet_door |
| 70 | eu - uk - population - migrant - trade | 9 | 70_eu_uk_population_migrant |
| 71 | australia - marriage - turnbull - katter - samesex | 9 | 71_australia_marriage_turnbull_katter |
| 72 | sugar - gin - sabmiller - inbev - ab | 8 | 72_sugar_gin_sabmiller_inbev |
| 73 | suu - kyi - rohingya - rakhine - myanmar | 8 | 73_suu_kyi_rohingya_rakhine |
| 74 | nadeau - field - aircraft - cordon - accidents | 8 | 74_nadeau_field_aircraft_cordon |
| 75 | abortion - ireland - law - unborn - case | 8 | 75_abortion_ireland_law_unborn |
| 76 | homosexuality - tor - homosexual - law - gay | 7 | 76_homosexuality_tor_homosexual_law |
| 77 | castro - cuba - cuban - fidel - havana | 7 | 77_castro_cuba_cuban_fidel |
| 78 | china - samsung - firm - business - cheil | 7 | 78_china_samsung_firm_business |
| 79 | event - festival - technology - campsite - interactive | 6 | 79_event_festival_technology_campsite |
| 80 | vw - volkswagen - production - emission - carmaker | 6 | 80_vw_volkswagen_production_emission |
| 81 | mohammed - gjolla - sheriff - nca - terrorism | 6 | 81_mohammed_gjolla_sheriff_nca |
| 82 | tb - tuberculosis - disease - badger - zoonotic | 5 | 82_tb_tuberculosis_disease_badger |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
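For reference, a model with these hyperparameters would be instantiated roughly as follows (a sketch using the standard BERTopic constructor; the embedding model is not specified in this card):
```python
from bertopic import BERTopic

topic_model = BERTopic(
    calculate_probabilities=True,
    language="english",
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
)
```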
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
|
PengQu/open_llama_7b_v2_vicuna_Chinese | PengQu | 2023-08-03T11:26:22Z | 7 | 4 | transformers | ["transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:PengQu/langchain-MRKL-finetune", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-07-17T11:44:44Z |
---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- PengQu/langchain-MRKL-finetune
language:
- zh
- en
---
# open_llama_7b_v2_vicuna_Chinese
open_llama_7b_v2_vicuna_Chinese is a chat model supervised fine-tuned (all parameters) on vicuna ShareGPT data in both **English** and **Chinese**.
- Foundation model: [open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2), a **commercially usable** language model.
- Fine-tuning data: ShareGPT, ShareGPT-ZH, Langchain-MRKL-finetune
- Training code: based on [FastChat](https://github.com/lm-sys/FastChat)
## Loading the Weights with Hugging Face Transformers
**Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("PengQu/open_llama_7b_v2_vicuna_Chinese",use_fast=False)
model = AutoModelForCausalLM.from_pretrained("PengQu/open_llama_7b_v2_vicuna_Chinese").to("cuda")
instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
prompt = instruction.format('用flask写一个简单的http服务器。')
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(input_ids=input_ids, max_new_tokens=512)
print(tokenizer.decode(generation_output[0],skip_special_tokens=True))
```
The output is as follows:
```
用flask写一个简单的http服务器。
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Hello, World!'
if __name__ == '__main__':
app.run()
这段代码定义了一个Flask应用程序,并为根路径('/')定义了一个路由。当用户在其Web浏览器中导航到该路径时,将调用`hello()`函数,并返回字符串“Hello, World!”。
要运行此代码,您需要在计算机上安装Flask。您可以使用以下命令使用pip安装它:
pip install Flask
安装Flask后,您可以使用以下命令运行代码:
python app.py
这将启动一个本地开发服务器,您可以使用Web浏览器访问它,方法是导航到`http://localhost:5000/`。
您还可以通过添加其他路由和功能来进一步自定义代码。例如,您可以为不同的端点定义不同的路由,并使用请求数据执行某些操作。您还可以向应用程序添加错误处理和用户身份验证。
```
## Major Improvements
- Fine-tuned from open_llama_7b_v2, so fully available for commercial use.
- Achieves the same level of English performance as vicuna-7b and outperforms vicuna-7b in Chinese.
- Has better programming ability than vicuna-7b, likely due to the use of the StarCoder dataset in open_llama_7b_v2.
- Supports the langchain-MRKL format (agent="zero-shot-react-description").
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_l5_r4_manual | KingKazma | 2023-08-03T11:20:22Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T11:20:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
MohaK/ppo-LunarLander-v2 | MohaK | 2023-08-03T11:12:15Z | 3 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T11:06:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.73 +/- 19.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
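In the meantime, a minimal loading sketch (the checkpoint filename follows the course's naming convention and is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(repo_id="MohaK/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```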
|
mrkushrz/Llama2_PA_E_Commerce_FAQ | mrkushrz | 2023-08-03T10:51:30Z | 2 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T10:51:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
samhog/psychology-llama-rlhf | samhog | 2023-08-03T10:49:38Z | 0 | 1 | null | ["region:us"] | null | 2023-05-30T18:52:50Z |
# Psychology LLaMA RLHF 🦙🙋♂️
This is a LLaMA-7B-based language model trained in the field of psychology using Reinforcement Learning from Human Feedback (RLHF). To learn more about RLHF, I recommend [this](https://huggingface.co/blog/rlhf) great blog post on Hugging Face. For insights into the process of fine-tuning with RLHF, there is another great blog post, also from Hugging Face, [here](https://huggingface.co/blog/stackllama)!
**Links**: [Reward model](https://huggingface.co/samhog/RLHF-psychology-alpaca-rm); [Base model](https://huggingface.co/samhog/psychology-llama-merged)
### Background 💡
This model was developed as part of a thesis project in the field of machine learning and psychology. The goal of the thesis was to compare reinforcement learning from human feedback with reinforcement learning from AI feedback. Evaluation showed that this model performed significantly better (avg. score 2.70 out of 4) than the base model (1.22), but significantly worse than ChatGPT (3.20). Further, the evaluation found no significant difference between this model and the [RLAIF model](https://huggingface.co/samhog/psychology-llama-rlaif) (2.98). It was trained on a total of 2,000 data points for 4 hours on a single A100 GPU through Google Colab. Even though this model sometimes outputs appropriate answers, it suffers from *the repetition problem*.
### Paper 📜
"Comparison Between RLHF and RLAIF in Fine-Tuning a Large Language Model"
The paper can be found [here](https://www.diva-portal.org/smash/record.jsf?dswid=3835&pid=diva2%3A1782683&c=2&searchType=SIMPLE&language=en&query=rlhf&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=undergraduate)!
### Usage 🏂
As a base model, it is recommended to use [samhog/psychology-alpaca-merged](https://huggingface.co/samhog/psychology-alpaca-merged). Note that this combination still produces some answers that suffer from the repetition problem, but not as frequently as [samhog/psychology-llama-merged](https://huggingface.co/samhog/psychology-llama-merged) does.
```python
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

# Load the base model in 8-bit and attach the RLHF adapter
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "samhog/psychology-alpaca-merged",
    load_in_8bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "samhog/psychology-llama-rlhf")
```
**Authors:**
Samuel Höglund, samhog@kth.se;
Josef Khedri, jkhedri@kth.se
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_l5_r2_manual | KingKazma | 2023-08-03T10:48:17Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T10:48:16Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
shtif/Reinforce-Pixelcopter-PLE-v0 | shtif | 2023-08-03T10:46:24Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T10:46:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.80 +/- 28.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
weav-geng/llama2-qlora-finetuned-resume-v1 | weav-geng | 2023-08-03T10:38:57Z | 3 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T10:38:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
usman0007/sd-600-200-1 | usman0007 | 2023-08-03T10:23:37Z | 28 | 0 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-03T10:20:22Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD_600/200/1 Dreambooth model trained by usman0007 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
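The model can also be loaded directly with `diffusers` (a minimal sketch; the prompt is illustrative, as the card does not state the instance prompt):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("usman0007/sd-600-200-1", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```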
Sample pictures of this concept:
|
runningsnake/distilbert-base-uncased-finetuned-squad-d5716d28 | runningsnake | 2023-08-03T10:23:23Z | 84 | 0 | transformers | ["transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | question-answering | 2023-08-03T10:11:05Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
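A usage sketch with the standard `question-answering` pipeline (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="runningsnake/distilbert-base-uncased-finetuned-squad-d5716d28")
result = qa(
    question="Which model acts as the teacher?",
    context="A DistilBERT student is fine-tuned on SQuAD v1.1 with a BERT model acting as a teacher.",
)
print(result["answer"], result["score"])
```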
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
gpt2go/vicuna-7b-instruct-ft-adapters-physics1.1 | gpt2go | 2023-08-03T10:18:14Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T10:18:12Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
TheTravellingEngineer/bloom-1b1-RLHF | TheTravellingEngineer | 2023-08-03T10:16:07Z | 132 | 0 | transformers | ["transformers", "pytorch", "endpoints_compatible", "region:us"] | null | 2023-08-03T10:03:08Z |
The base model is bigscience/bloom-1b1. It was fine-tuned using RLHF; the dataset and the model prompt are similar to the original model's.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original BLOOM model, and comes with no warranty or guarantees of any kind.**
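A loading sketch (standard `transformers` causal-LM API, in fp16 per the note above):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheTravellingEngineer/bloom-1b1-RLHF")
model = AutoModelForCausalLM.from_pretrained(
    "TheTravellingEngineer/bloom-1b1-RLHF", torch_dtype=torch.float16, device_map="auto"
)
```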
---
license: bloom
datasets:
- timdettmers/openassistant-guanaco
language:
- en
reference: https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main
---
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_l5_r1_manual | KingKazma | 2023-08-03T10:15:35Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T10:15:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
usman0007/sd-600-200 | usman0007 | 2023-08-03T10:03:55Z | 28 | 0 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-08-03T10:00:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD_600/200 Dreambooth model trained by usman0007 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
normanStorm/my_awesome_mc_model | normanStorm | 2023-08-03T10:03:53Z | 84 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us"] | multiple-choice | 2023-08-03T09:58:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: my_awesome_mc_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mc_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8509
- Accuracy: 0.7357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
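For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch; model and dataset wiring are omitted, and the Adam betas/epsilon above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_mc_model",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```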
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 35 | 1.0527 | 0.65 |
| No log | 2.0 | 70 | 0.8225 | 0.7 |
| No log | 3.0 | 105 | 0.8509 | 0.7357 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
maurope/vit_model | maurope | 2023-08-03T09:58:44Z | 161 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-08-03T09:02:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9774436090225563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Accuracy: 0.9774
## Model description
This model distinguishes healthy bean leaves from diseased ones, and can further distinguish between two diseases: bean rust and angular leaf spot. Upload a photo and the model returns the probability of each of the three categories (see the usage sketch below the sample images).
# Healthy

# Bean Rust

# Angular Leaf Spot

## Intended uses & limitations
The model only classifies bean leaves.
## Training and evaluation data
The model was trained with the dataset beans: https://huggingface.co/datasets/beans
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1435 | 3.85 | 500 | 0.0821 | 0.9774 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
slaqrichi/llama2-qlora-finetunined-french | slaqrichi | 2023-08-03T09:51:04Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-07-21T16:24:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
FelixChao/vicuna-7b-instruct-ft-adapters-chemical1.4 | FelixChao | 2023-08-03T09:31:33Z | 4 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T09:31:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
yoakim0202/custom-controlnet-kaggle-epoch10 | yoakim0202 | 2023-08-03T09:13:06Z | 4 | 0 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-08-03T00:04:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-yoakim0202/outputs
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
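A loading sketch (the type of conditioning image is not stated in this card, so the input below is illustrative):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "yoakim0202/custom-controlnet-kaggle-epoch10", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

conditioning = load_image("conditioning.png")  # illustrative conditioning image
image = pipe("a prompt of your choice", image=conditioning).images[0]
```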
|
Tanmay09516/stable-beluga-7b-qlora-finetuned-science-exam | Tanmay09516 | 2023-08-03T09:03:03Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T09:02:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Tanmay09516/llama2-qlora-finetuned-science-exam | Tanmay09516 | 2023-08-03T09:02:57Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-08-03T09:02:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
BSC-LT/roberta_model_for_anonimization | BSC-LT | 2023-08-03T08:53:19Z | 93 | 0 | transformers | ["transformers", "pytorch", "roberta", "token-classification", "es", "ca", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2023-08-03T08:14:55Z |
---
license: mit
language:
- es
- ca
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: "Me llamo Alex y vivo en Barcelona"
---
This is a RoBERTa-based multilingual (Catalan & Spanish) anonymization model, for use with BSC's AnonymizationPipeline at:
https://github.com/TeMU-BSC/AnonymizationPipeline.
The anonymization pipeline is a library for identifying sensitive data, and ultimately anonymizing it, in Spanish and Catalan user-generated plain text.
This model can be used standalone, but it is meant to work within the pipeline.
The RoBERTa model can detect the following entities: ORG, PER, LOC (a usage sketch follows the table below).
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.03 |
| `ENTS_P` | 89.7 |
| `ENTS_R` | 90.3 |
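A standalone usage sketch (the `aggregation_strategy` value is an assumption):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="BSC-LT/roberta_model_for_anonimization",
    aggregation_strategy="simple",
)
print(ner("Me llamo Alex y vivo en Barcelona"))  # expected: a PER span and a LOC span
```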
|
NasimB/cbt_gutenberg_fixed_log_rarity-mixed-seed | NasimB | 2023-08-03T08:34:09Z | 7 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-08-03T06:27:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt_gutenberg_fixed_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt_gutenberg_fixed_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3462 | 0.29 | 500 | 5.3473 |
| 5.0335 | 0.58 | 1000 | 4.9245 |
| 4.7147 | 0.87 | 1500 | 4.6935 |
| 4.4438 | 1.17 | 2000 | 4.5472 |
| 4.2996 | 1.46 | 2500 | 4.4324 |
| 4.2067 | 1.75 | 3000 | 4.3324 |
| 4.0832 | 2.04 | 3500 | 4.2565 |
| 3.8934 | 2.33 | 4000 | 4.2183 |
| 3.87 | 2.62 | 4500 | 4.1591 |
| 3.8293 | 2.91 | 5000 | 4.1082 |
| 3.6536 | 3.21 | 5500 | 4.1071 |
| 3.591 | 3.5 | 6000 | 4.0729 |
| 3.5693 | 3.79 | 6500 | 4.0423 |
| 3.4891 | 4.08 | 7000 | 4.0408 |
| 3.3205 | 4.37 | 7500 | 4.0381 |
| 3.3116 | 4.66 | 8000 | 4.0225 |
| 3.31 | 4.95 | 8500 | 4.0096 |
| 3.1591 | 5.24 | 9000 | 4.0226 |
| 3.1418 | 5.54 | 9500 | 4.0215 |
| 3.1378 | 5.83 | 10000 | 4.0211 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
intellya22/new-test-model-rs | intellya22 | 2023-08-03T08:26:26Z | 2 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2023-08-03T08:21:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# intellya22/new-test-model-rs
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('intellya22/new-test-model-rs')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=intellya22/new-test-model-rs)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 119 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 29,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 179,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
|
Ellbendls/rl_course_vizdoom_health_gathering_supreme | Ellbendls | 2023-08-03T08:24:08Z | 0 | 0 | sample-factory | ["sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T07:52:12Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.86 +/- 5.42
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Ellbendls/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# The original card leaked a notebook launcher path here; the standard
# Sample-Factory VizDoom enjoy script is assumed instead.
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# As above, the standard Sample-Factory VizDoom training script is assumed
# in place of the leaked notebook launcher path.
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
Mursel/redpj3B-lora-int8-alpaca | Mursel | 2023-08-03T08:13:26Z | 3 | 0 | peft | ["peft", "pytorch", "gpt_neox", "8-bit", "region:us"] | null | 2023-08-03T07:41:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
rdmpage/autotrain-bwpages-start-only-79636141312 | rdmpage | 2023-08-03T08:13:13Z | 151 | 0 | transformers | ["transformers", "pytorch", "safetensors", "swin", "image-classification", "autotrain", "vision", "dataset:rdmpage/autotrain-data-bwpages-start-only", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2023-08-03T08:12:07Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- rdmpage/autotrain-data-bwpages-start-only
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.5096220913134444
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 79636141312
- CO2 Emissions (in grams): 0.5096
## Validation Metrics
- Loss: 0.170
- Accuracy: 0.932
- Macro F1: 0.932
- Micro F1: 0.932
- Weighted F1: 0.931
- Macro Precision: 0.946
- Micro Precision: 0.932
- Weighted Precision: 0.936
- Macro Recall: 0.925
- Micro Recall: 0.932
- Weighted Recall: 0.932
|
nichonifroa/distilbert-base-uncased-finetuned-clinc | nichonifroa | 2023-08-03T08:12:35Z | 75 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-08-03T07:52:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9170967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.289 | 1.0 | 318 | 3.2762 | 0.7261 |
| 2.6092 | 2.0 | 636 | 1.8625 | 0.8384 |
| 1.5337 | 3.0 | 954 | 1.1513 | 0.8987 |
| 1.0043 | 4.0 | 1272 | 0.8540 | 0.9123 |
| 0.7922 | 5.0 | 1590 | 0.7724 | 0.9171 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
castawaypirate/q-Taxi-v3 | castawaypirate | 2023-08-03T08:10:42Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T08:10:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gym (or gymnasium) must be installed

# `load_from_hub` is the helper defined in the Deep RL Course notebooks, not a library import
model = load_from_hub(repo_id="castawaypirate/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
Eitanli/distilbert-qa-checkpoint-v2 | Eitanli | 2023-08-03T07:59:44Z | 138 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2023-08-01T10:12:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-qa-checkpoint-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-qa-checkpoint-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6564 | 1.0 | 658 | 0.3612 |
| 0.3397 | 2.0 | 1316 | 0.3361 |
| 0.2885 | 3.0 | 1974 | 0.3515 |
| 0.2407 | 4.0 | 2632 | 0.3672 |
| 0.2213 | 5.0 | 3290 | 0.3718 |
| 0.2197 | 6.0 | 3948 | 0.3967 |
| 0.1986 | 7.0 | 4606 | 0.4115 |
| 0.1932 | 8.0 | 5264 | 0.4152 |
| 0.19 | 9.0 | 5922 | 0.4208 |
| 0.1844 | 10.0 | 6580 | 0.4472 |
| 0.1824 | 11.0 | 7238 | 0.4466 |
| 0.1812 | 12.0 | 7896 | 0.2695 |
| 0.0078 | 13.0 | 8554 | 0.2824 |
| 0.0073 | 14.0 | 9212 | 0.2793 |
| 0.0048 | 15.0 | 9870 | 0.3107 |
| 0.0033 | 16.0 | 10528 | 0.3074 |
| 0.0022 | 17.0 | 11186 | 0.3073 |
| 0.0038 | 18.0 | 11844 | 0.3147 |
| 0.0013 | 19.0 | 12502 | 0.3160 |
| 0.0008 | 20.0 | 13160 | 0.3141 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
BabaYaga048/Taxi-v3-1 | BabaYaga048 | 2023-08-03T07:59:25Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-08-03T07:59:22Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # gym (or gymnasium) must be installed

# `load_from_hub` is the helper defined in the Deep RL Course notebooks, not a library import
model = load_from_hub(repo_id="BabaYaga048/Taxi-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
fabiochiu/dddoge-dog | fabiochiu | 2023-08-03T07:55:13Z | 34 | 0 | diffusers | ["diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "animal", "dataset:fabiochiu/doge", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2022-12-27T20:11:23Z |
---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
datasets: fabiochiu/doge
widget:
- text: a photo of dddoge dog in the Acropolis
---
# DreamBooth model for the dddoge concept trained by fabiochiu on the fabiochiu/doge dataset.
This is a Stable Diffusion model fine-tuned on the dddoge concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of dddoge dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('fabiochiu/dddoge-dog')
# a prompt is required; use the instance prompt the model was trained with
image = pipeline("a photo of dddoge dog").images[0]
image
```
|
fabiochiu/t5-base-tag-generation
|
fabiochiu
| 2023-08-03T07:55:12Z | 105,949 | 52 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-19T08:45:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-tag-generation
results: []
widget:
- text: "Python is a high-level, interpreted, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically-typed and garbage-collected."
example_title: "Programming"
---
# Model description
This model is [t5-base](https://huggingface.co/t5-base) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article tags using the article textual content as input. While usually formulated as a multi-label classification problem, this model deals with _tag generation_ as a text2text generation task (inspiration from [text2tags](https://huggingface.co/efederici/text2tags)).
# How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-base-tag-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-base-tag-generation")
text = """
Python is a high-level, interpreted, general-purpose programming language. Its
design philosophy emphasizes code readability with the use of significant
indentation. Python is dynamically-typed and garbage-collected.
"""
inputs = tokenizer([text], max_length=512, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10,
max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
tags = list(set(decoded_output.strip().split(", ")))
print(tags)
# ['Programming', 'Code', 'Software Development', 'Programming Languages',
# 'Software', 'Developer', 'Python', 'Software Engineering', 'Science',
# 'Engineering', 'Technology', 'Computer Science', 'Coding', 'Digital', 'Tech',
# 'Python Programming']
```
## Data cleaning
The dataset is composed of Medium articles and their tags. However, each Medium article can have at most five tags, so the author has to choose the tags he/she believes are best (mainly for SEO-related purposes). This means that an article with the "Python" tag may not have the "Programming Languages" tag, even though the former implies the latter.
To clean the dataset and account for this problem, a hand-made taxonomy of about 1000 tags was built. Using the taxonomy, the tags of each article have been augmented (e.g. an article with the "Python" tag also gets the "Programming Languages" tag, as the taxonomy says that "Python" is part of "Programming Languages"). The taxonomy is not public; if you are interested in it, please send an email to chiusanofabio94@gmail.com.
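As an illustration, here is a minimal sketch of that augmentation step, assuming the taxonomy is represented as a child-to-parents mapping (the toy dictionary below stands in for the private ~1000-tag taxonomy):
```python
# Toy stand-in for the private taxonomy: each tag maps to the tags it implies
taxonomy = {
    "Python": ["Programming Languages"],
    "Programming Languages": ["Programming"],
}

def augment_tags(tags):
    """Add every ancestor tag implied by the taxonomy."""
    augmented = set(tags)
    stack = list(tags)
    while stack:
        for parent in taxonomy.get(stack.pop(), []):
            if parent not in augmented:
                augmented.add(parent)
                stack.append(parent)
    return sorted(augmented)

print(augment_tags(["Python"]))
# ['Programming', 'Programming Languages', 'Python']
```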
## Training and evaluation data
The model has been trained on a single epoch spanning about 50000 articles, evaluating on 1000 random articles not used during training.
## Evaluation results
- eval_loss: 0.8474
- eval_rouge1: 38.6033
- eval_rouge2: 20.5952
- eval_rougeL: 36.4458
- eval_rougeLsum: 36.3202
- eval_gen_len: 15.257 # average number of generated tokens
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fabiochiu/t5-small-medium-title-generation
|
fabiochiu
| 2023-08-03T07:55:10Z | 455 | 9 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-05T11:06:46Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-medium-title-generation
results: []
widget:
- text: "summarize: Many financial institutions started building conversational AI, prior to the Covid19 pandemic, as part of a digital transformation initiative. These initial solutions were high profile, highly personalized virtual assistants — like the Erica chatbot from Bank of America. As the pandemic hit, the need changed as contact centers were under increased pressures. As Cathal McGloin of ServisBOT explains in 'how it started, and how it is going,' financial institutions were looking for ways to automate solutions to help get back to 'normal' levels of customer service. This resulted in a change from the 'future of conversational AI' to a real tactical assistant that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend. Banks were originally looking to conversational AI as part of digital transformation to keep up with the times. However, with the pandemic, it has been more about customer retention and customer satisfaction. In addition, new use cases came about as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita Kumar of Deloitte points out, banks were dealing with an influx of calls about new concerns, like questions around the Paycheck Protection Program (PPP) loans. This resulted in an increase in volume, without enough agents to assist customers, and tipped the scale to incorporate conversational AI. When choosing initial use cases to support, financial institutions often start with high volume, low complexity tasks. For example, password resets, checking account balances, or checking the status of a transaction, as Vinita points out. From there, the use cases can evolve as the banks get more mature in developing conversational AI, and as the customers become more engaged with the solutions. Cathal indicates another good way for banks to start is looking at use cases that are a pain point, and also do not require a lot of IT support. Some financial institutions may have a multi-year technology roadmap, which can make it harder to get a new service started. A simple chatbot for document collection in an onboarding process can result in high engagement, and a high return on investment. For example, Cathal has a banking customer that implemented a chatbot to capture a driver’s license to be used in the verification process of adding an additional user to an account — it has over 85% engagement with high satisfaction. An interesting use case Haritha discovered involved educating customers on financial matters. People feel more comfortable asking a chatbot what might be considered a 'dumb' question, as the chatbot is less judgmental. Users can be more ambiguous with their questions as well, not knowing the right words to use, as chatbot can help narrow things down."
example_title: "Banking on Bots"
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Model description
This model is [t5-small](https://huggingface.co/t5-small) fine-tuned on the [190k Medium Articles](https://www.kaggle.com/datasets/fabiochiusano/medium-articles) dataset for predicting article titles using the article textual content as input.
There are two versions of the model:
- [t5-small-medium-title-generation](https://huggingface.co/fabiochiu/t5-small-medium-title-generation): trained from [t5-small](https://huggingface.co/t5-small).
- [t5-base-medium-title-generation](https://huggingface.co/fabiochiu/t5-base-medium-title-generation): trained from [t5-base](https://huggingface.co/t5-base).
Visit the [title-generation space](https://huggingface.co/spaces/fabiochiu/title-generation) to try the model with different text generation parameters.
# How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import nltk
nltk.download('punkt')
tokenizer = AutoTokenizer.from_pretrained("fabiochiu/t5-small-medium-title-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("fabiochiu/t5-small-medium-title-generation")
text = """
Many financial institutions started building conversational AI, prior to the Covid19
pandemic, as part of a digital transformation initiative. These initial solutions
were high profile, highly personalized virtual assistants — like the Erica chatbot
from Bank of America. As the pandemic hit, the need changed as contact centers were
under increased pressures. As Cathal McGloin of ServisBOT explains in “how it started,
and how it is going,” financial institutions were looking for ways to automate
solutions to help get back to “normal” levels of customer service. This resulted
in a change from the “future of conversational AI” to a real tactical assistant
that can help in customer service. Haritha Dev of Wells Fargo, saw a similar trend.
Banks were originally looking to conversational AI as part of digital transformation
to keep up with the times. However, with the pandemic, it has been more about
customer retention and customer satisfaction. In addition, new use cases came about
as a result of Covid-19 that accelerated adoption of conversational AI. As Vinita
Kumar of Deloitte points out, banks were dealing with an influx of calls about new
concerns, like questions around the Paycheck Protection Program (PPP) loans. This
resulted in an increase in volume, without enough agents to assist customers, and
tipped the scale to incorporate conversational AI. When choosing initial use cases
to support, financial institutions often start with high volume, low complexity
tasks. For example, password resets, checking account balances, or checking the
status of a transaction, as Vinita points out. From there, the use cases can evolve
as the banks get more mature in developing conversational AI, and as the customers
become more engaged with the solutions. Cathal indicates another good way for banks
to start is looking at use cases that are a pain point, and also do not require a
lot of IT support. Some financial institutions may have a multi-year technology
roadmap, which can make it harder to get a new service started. A simple chatbot
for document collection in an onboarding process can result in high engagement,
and a high return on investment. For example, Cathal has a banking customer that
implemented a chatbot to capture a driver’s license to be used in the verification
process of adding an additional user to an account — it has over 85% engagement
with high satisfaction. An interesting use case Haritha discovered involved
educating customers on financial matters. People feel more comfortable asking a
chatbot what might be considered a “dumb” question, as the chatbot is less judgmental.
Users can be more ambiguous with their questions as well, not knowing the right
words to use, as chatbot can help narrow things down.
"""
inputs = ["summarize: " + text]
max_input_length = 512  # assumed cap on input tokens; not defined in the original snippet
inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, return_tensors="pt")
output = model.generate(**inputs, num_beams=8, do_sample=True, min_length=10, max_length=64)
decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
predicted_title = nltk.sent_tokenize(decoded_output.strip())[0]
print(predicted_title)
# Conversational AI: The Future of Customer Service
```
## Training and evaluation data
The model has been trained on a single epoch spanning about 16000 articles, evaluating on 1000 random articles not used during training.
### Training results
The model has been evaluated on a random dataset split of 1000 articles not used during training and validation.
- Rouge-1: 27.8%
- Rouge-2: 14.9%
- Rouge-L: 26.9%
- Rouge-Lsum: 26.9%
- Average length of the generated titles: 13 tokens (about 9 English words)
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
hw2942/chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-000001SH-v1
|
hw2942
| 2023-08-03T07:51:08Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"text-classification",
"generated_from_trainer",
"finance",
"zh",
"base_model:Lowin/chinese-bigbird-wwm-base-4096",
"base_model:finetune:Lowin/chinese-bigbird-wwm-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-03T07:18:10Z |
---
license: apache-2.0
base_model: Lowin/chinese-bigbird-wwm-base-4096
tags:
- generated_from_trainer
- finance
metrics:
- accuracy
model-index:
- name: >-
chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-000001SH-v1
results: []
language:
- zh
widget:
- text: >-
惠誉下调美国3A主权信用评级次日,经济学家看轻评级下调影响,美国7月ADP新增就业超预期爆表。风险情绪被重创,标普、道指、小盘股齐跌约1%,纳指跌超2%创2月以来最差。
美国超导跌近29%。美债发行海啸即将来袭,10年期美债收益率一度创九个月新高,两年期美债收益率跌幅显著收窄。美元转涨刷新三周半高位。
商品普跌。油价跌超2%,美油跌穿80美元整数位。黄金失守1940美元至三周新低。
中国市场方面,美股时段,中概股指跌4%,理想汽车则再创历史新高,离岸人民币一度跌穿7.21元,最深跌270点至一周低位。沪指收跌近1%,医药、银行疲软,超导概念、地产、券商强势。恒指收跌2.47%,南向资金净流入4.02亿港元。
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bigbird-wwm-base-4096-wallstreetcn-morning-news-market-overview-open-000001SH-v1
This model is a fine-tuned version of [Lowin/chinese-bigbird-wwm-base-4096](https://huggingface.co/Lowin/chinese-bigbird-wwm-base-4096) on the dataset of Wallstreetcn Morning News Market Overview with overnight index (000001.SH) movement labels.
It achieves the following results on the evaluation set:
- Loss: 0.9551
- Accuracy: 0.6897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 75 | 0.6903 | 0.5862 |
| No log | 2.0 | 150 | 0.6608 | 0.6552 |
| No log | 3.0 | 225 | 0.9551 | 0.6897 |
| No log | 4.0 | 300 | 1.8075 | 0.5862 |
| No log | 5.0 | 375 | 1.7461 | 0.6552 |
| No log | 6.0 | 450 | 2.2845 | 0.5862 |
| 0.3813 | 7.0 | 525 | 2.2898 | 0.5862 |
| 0.3813 | 8.0 | 600 | 2.0169 | 0.6552 |
| 0.3813 | 9.0 | 675 | 2.2466 | 0.6552 |
| 0.3813 | 10.0 | 750 | 2.2800 | 0.6552 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
blackbingris/321221321232
|
blackbingris
| 2023-08-03T07:46:33Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-07-28T08:30:59Z |
---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
UniversityOfGalway/ppo-LunarLander-v2
|
UniversityOfGalway
| 2023-08-03T07:46:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:45:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.44 +/- 19.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
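Until then, a minimal loading sketch following the standard `huggingface_sb3` pattern; the checkpoint filename is an assumption and should be verified against the repository files:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is assumed; check the repo)
checkpoint = load_from_hub(
    repo_id="UniversityOfGalway/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Run a single greedy evaluation episode
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```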
|
Josrf/q-FrozenLake-v1-4x4-noSlippery
|
Josrf
| 2023-08-03T07:45:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:45:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the Deep RL course helper (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="Josrf/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vignesh-trustt/llama-v2-7B
|
vignesh-trustt
| 2023-08-03T07:39:23Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-03T07:39:23Z |
---
license: bigscience-openrail-m
---
|
jensg/speecht5_finetuned_voxpopuli_nl
|
jensg
| 2023-08-03T07:37:54Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-03T06:17:46Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4600
## Model description
More information needed
## Intended uses & limitations
More information needed
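In the meantime, a minimal inference sketch is given below. The speaker embedding here is a random placeholder (an assumption); a real 512-dim x-vector, such as those used in the SpeechT5 examples, should be substituted:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("jensg/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("jensg/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")

# Placeholder: replace with a real 512-dim speaker x-vector
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```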
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5233 | 4.3 | 1000 | 0.4784 |
| 0.4927 | 8.61 | 2000 | 0.4652 |
| 0.4958 | 12.91 | 3000 | 0.4628 |
| 0.4916 | 17.21 | 4000 | 0.4600 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
google/ddpm-cifar10-32
|
google
| 2023-08-03T07:24:08Z | 44,317 | 63 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"arxiv:2006.11239",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-06-16T12:53:22Z |
---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
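As an example of the trade-off mentioned above, here is a sketch of the faster DDIM route on the same checkpoint (50 steps is just an illustrative setting):
```python
from diffusers import DDIMPipeline

# Load the same checkpoint with the DDIM pipeline for faster sampling
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")

# Fewer denoising steps trade a little quality for a large speedup
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```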
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4. 
|
ritvic/model_replacement
|
ritvic
| 2023-08-03T07:22:28Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T07:22:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
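For reference, a sketch of how the same settings would be expressed as a `transformers` `BitsAndBytesConfig` when loading a quantized base model for this adapter (the base model name is a placeholder; check the adapter's `adapter_config.json`):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# Placeholder base model: read the real one from adapter_config.json
base = AutoModelForCausalLM.from_pretrained(
    "base-model-name", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ritvic/model_replacement")
```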
### Framework versions
- PEFT 0.5.0.dev0
|
leahsuperb/ppo-LunarLander-v2
|
leahsuperb
| 2023-08-03T07:21:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:20:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.69 +/- 22.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
zohaib99k/Nous-Hermes-Llama2-8bit-GPTQ
|
zohaib99k
| 2023-08-03T07:07:37Z | 5 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-03T06:06:58Z |
---
inference: false
language:
- en
license: other
model_type: llama
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Nous Research's Nous Hermes Llama 2 13B GPTQ
These files are GPTQ model files for [Nous Research's Nous Hermes Llama 2 13B](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Nous-Hermes-Llama2-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-Llama2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Nous-Hermes-Llama2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-Llama2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Nous-Hermes-Llama2-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Nous Research's Nous Hermes Llama 2 13B
# Model Card: Nous-Hermes-Llama2-13b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and new, for anyone who wanted to keep Hermes as similar to the old one, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine.
## Example Outputs:




## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
## Benchmark Results
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|agieval_aqua_rat | 0|acc |0.2362|± |0.0267|
| | |acc_norm|0.2480|± |0.0272|
|agieval_logiqa_en | 0|acc |0.3425|± |0.0186|
| | |acc_norm|0.3472|± |0.0187|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.3510|± |0.0212|
| | |acc_norm|0.3627|± |0.0213|
|agieval_lsat_rc | 0|acc |0.4647|± |0.0305|
| | |acc_norm|0.4424|± |0.0303|
|agieval_sat_en | 0|acc |0.6602|± |0.0331|
| | |acc_norm|0.6165|± |0.0340|
|agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346|
| | |acc_norm|0.4272|± |0.0345|
|agieval_sat_math | 0|acc |0.2909|± |0.0307|
| | |acc_norm|0.2727|± |0.0301|
```
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|arc_challenge| 0|acc |0.5102|± |0.0146|
| | |acc_norm|0.5213|± |0.0146|
|arc_easy | 0|acc |0.7959|± |0.0083|
| | |acc_norm|0.7567|± |0.0088|
|boolq | 1|acc |0.8394|± |0.0064|
|hellaswag | 0|acc |0.6164|± |0.0049|
| | |acc_norm|0.8009|± |0.0040|
|openbookqa | 0|acc |0.3580|± |0.0215|
| | |acc_norm|0.4620|± |0.0223|
|piqa | 0|acc |0.7992|± |0.0093|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7127|± |0.0127|
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192|
|bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123|
|bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287|
```
These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores:
- GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1
- 0.3657 on BigBench, up from 0.328 on hermes-llama1
- 0.372 on AGIEval, up from 0.354 on Hermes-llama1
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position.
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
|
polejowska/detr-r50-cd45rb-8ah-6l
|
polejowska
| 2023-08-03T07:07:17Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-11T14:58:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-6l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-6l
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5794
## Model description
More information needed
## Intended uses & limitations
More information needed
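In the absence of further details, a minimal inference sketch using the `object-detection` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="polejowska/detr-r50-cd45rb-8ah-6l")

# Placeholder path: point this at an image from the cd45rb domain
for det in detector("example_image.png"):
    print(det["label"], round(det["score"], 3), det["box"])
```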
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.4794 | 1.0 | 4606 | 1.8245 |
| 2.2469 | 2.0 | 9212 | 1.7593 |
| 2.1817 | 3.0 | 13818 | 1.7109 |
| 2.1418 | 4.0 | 18424 | 1.6976 |
| 2.118 | 5.0 | 23030 | 1.6695 |
| 2.1237 | 6.0 | 27636 | 1.6781 |
| 2.1025 | 7.0 | 32242 | 1.6574 |
| 2.0796 | 8.0 | 36848 | 1.6418 |
| 2.0672 | 9.0 | 41454 | 1.6333 |
| 2.0597 | 10.0 | 46060 | 1.6313 |
| 2.0948 | 11.0 | 50666 | 1.6546 |
| 2.0943 | 12.0 | 55272 | 1.6905 |
| 2.0819 | 13.0 | 59878 | 1.6430 |
| 2.0795 | 14.0 | 64484 | 1.6439 |
| 2.0566 | 15.0 | 69090 | 1.6449 |
| 2.0435 | 16.0 | 73696 | 1.6204 |
| 2.0375 | 17.0 | 78302 | 1.6195 |
| 2.032 | 18.0 | 82908 | 1.6128 |
| 2.0079 | 19.0 | 87514 | 1.6082 |
| 1.9985 | 20.0 | 92120 | 1.6037 |
| 1.9976 | 21.0 | 96726 | 1.6005 |
| 1.9887 | 22.0 | 101332 | 1.5969 |
| 1.9841 | 23.0 | 105938 | 1.5841 |
| 1.9763 | 24.0 | 110544 | 1.5826 |
| 1.9686 | 25.0 | 115150 | 1.5794 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BabaYaga048/q-FrozenLake-v1-4x4-noSlippery
|
BabaYaga048
| 2023-08-03T07:05:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:05:05Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the Deep RL course helper (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="BabaYaga048/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shre-db/distilbert-base-uncased-finetuned-imdb
|
shre-db
| 2023-08-03T07:02:51Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-03T06:42:25Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4125
## Model description
More information needed
## Intended uses & limitations
More information needed
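A minimal usage sketch with the `fill-mask` pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="shre-db/distilbert-base-uncased-finetuned-imdb")
for pred in mask_filler("This movie was a great [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```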
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7026 | 1.0 | 157 | 2.4957 |
| 2.581 | 2.0 | 314 | 2.4286 |
| 2.5363 | 3.0 | 471 | 2.4515 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
Valent2809/news_classifier_MAndA
|
Valent2809
| 2023-08-03T07:02:01Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-03T03:41:55Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Valent2809/news_classifier_MAndA
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Valent2809/news_classifier_MAndA
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0838
- Validation Loss: 0.0928
- Train Accuracy: 0.9733
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1626, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1433 | 0.1044 | 0.9726 | 0 |
| 0.0838 | 0.0928 | 0.9733 | 1 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Aspik101/vicuna-13b-v1.5-PL-lora_GGML
|
Aspik101
| 2023-08-03T06:52:00Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-03T06:39:00Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
jordyvl/vit-base_rvl_cdip_symce
|
jordyvl
| 2023-08-03T06:43:14Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-01T15:31:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_cdip_symce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_cdip_symce
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6253
- Accuracy: 0.8982
- Brier Loss: 0.1796
- Nll: 1.1468
- F1 Micro: 0.8982
- F1 Macro: 0.8984
- Ece: 0.0846
- Aurc: 0.0197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1665 | 1.0 | 2500 | 0.3898 | 0.8939 | 0.1621 | 1.1704 | 0.8939 | 0.8938 | 0.0463 | 0.0167 |
| 0.1439 | 2.0 | 5000 | 0.3927 | 0.8949 | 0.1602 | 1.1860 | 0.8949 | 0.8954 | 0.0506 | 0.0165 |
| 0.0889 | 3.0 | 7500 | 0.4389 | 0.8941 | 0.1684 | 1.1449 | 0.8941 | 0.8946 | 0.0637 | 0.0172 |
| 0.0574 | 4.0 | 10000 | 0.4870 | 0.8953 | 0.1741 | 1.1605 | 0.8953 | 0.8952 | 0.0719 | 0.0179 |
| 0.0372 | 5.0 | 12500 | 0.5259 | 0.8929 | 0.1792 | 1.1860 | 0.8929 | 0.8935 | 0.0775 | 0.0185 |
| 0.0225 | 6.0 | 15000 | 0.5579 | 0.8959 | 0.1784 | 1.1504 | 0.8959 | 0.8963 | 0.0799 | 0.0196 |
| 0.0126 | 7.0 | 17500 | 0.5905 | 0.8949 | 0.1811 | 1.1714 | 0.8949 | 0.8950 | 0.0836 | 0.0197 |
| 0.0081 | 8.0 | 20000 | 0.6011 | 0.8973 | 0.1791 | 1.1720 | 0.8973 | 0.8975 | 0.0828 | 0.0198 |
| 0.0048 | 9.0 | 22500 | 0.6198 | 0.8975 | 0.1800 | 1.1518 | 0.8975 | 0.8977 | 0.0847 | 0.0198 |
| 0.0038 | 10.0 | 25000 | 0.6253 | 0.8982 | 0.1796 | 1.1468 | 0.8982 | 0.8984 | 0.0846 | 0.0197 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bh8648/distilbert-base-uncased-finetuned-clinc
|
bh8648
| 2023-08-03T06:37:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T08:24:47Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7665
- Accuracy: 0.9174
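As a quick sanity check, the checkpoint can be queried through the standard text-classification pipeline (a minimal sketch; the example utterance is hypothetical):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bh8648/distilbert-base-uncased-finetuned-clinc")
# Returns the predicted intent label and its score.
print(classifier("Please transfer $100 from checking to savings."))
```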
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2879 | 1.0 | 318 | 3.2752 | 0.7239 |
| 2.6117 | 2.0 | 636 | 1.8616 | 0.8368 |
| 1.5335 | 3.0 | 954 | 1.1454 | 0.8987 |
| 0.9993 | 4.0 | 1272 | 0.8479 | 0.9126 |
| 0.7853 | 5.0 | 1590 | 0.7665 | 0.9174 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Doa-doa/llama-2-7b-FT-GCDA-29DAs-300steps
|
Doa-doa
| 2023-08-03T06:32:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T16:55:35Z |
---
pipeline_tag: text-generation
---
|
Karn07/engilsh_to_hindi_translation
|
Karn07
| 2023-08-03T06:29:10Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-31T12:34:28Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: engilsh_to_hindi_translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# engilsh_to_hindi_translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
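A minimal usage sketch (assuming the model follows the usual T5 translation-prefix convention; the exact prompt format used during fine-tuning is not documented here):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Karn07/engilsh_to_hindi_translation")
# The "translate English to Hindi:" prefix is an assumption based on T5 conventions.
print(translator("translate English to Hindi: How are you?")[0]["generated_text"])
```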
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
WeightWatcher/albert-large-v2-rte
|
WeightWatcher
| 2023-08-03T06:15:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T23:00:55Z |
---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-rte
This model was finetuned on the GLUE/rte task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-rte")
```
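RTE is a sentence-pair (entailment) task, so inference pairs a premise with a hypothesis. A minimal sketch continuing from the snippet above (the example pair is hypothetical; the label order follows the GLUE convention of 0 = entailment, 1 = not_entailment):
```python
import torch
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("WeightWatcher/albert-large-v2-rte")
inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise
    "Someone is performing music.",         # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # probabilities over {entailment, not_entailment}
```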
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#rte
RTE is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
A checkpoint from MNLI was NOT used, differing from footnote 4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters, (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps,) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#rte
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9971887550200803
Evaluation Classification accuracy: 0.8014440433212996
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
LeoMoyu/llama2-qlora-finetunined-french
|
LeoMoyu
| 2023-08-03T05:57:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T05:57:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
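For reference, the equivalent `BitsAndBytesConfig` can be reconstructed in code; this is a sketch of the settings listed above, not a file shipped with this repo:
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```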
### Framework versions
- PEFT 0.5.0.dev0
|
SearchUnify-ML/xgen-7b-8k-open-instruct-gptq
|
SearchUnify-ML
| 2023-08-03T05:43:18Z | 6 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-04T11:54:09Z |
---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
pipeline_tag: text-generation
inference: false
---
# SearchUnify/xgen-7b-8k-open-instruct-gptq
With its industry-first robust LLM Integrations across its suite of products ([Cognitive Search](https://www.searchunify.com/products/cognitive-search/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [SUVA](https://www.searchunify.com/products/suva/), [Knowbler](https://www.searchunify.com/products/knowbler/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [Escalation Predictor](https://applications.searchunify.com/escalation-predictor?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [Agent Helper](https://applications.searchunify.com/agent-helper?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face) and [Community Helper](https://applications.searchunify.com/community-helper?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face)) coupled with the federated retrieval augmented generation (FRAG) architecture, [SearchUnify's unified cognitive platform](https://www.searchunify.com/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face) fetches relevant information or responses to deliver more accurate and contextually appropriate support and self-service experiences.
Leveraging the state-of-the-art GPTQ quantization method, SearchUnify optimized the XGen-7B Model for low memory footprint and rapid response generation.
These are 4-bit GPTQ model files for [VMware's XGen 7B 8K Open Instruct](https://huggingface.co/VMware/xgen-7b-8k-open-instruct), produced by quantizing the original model to 4 bits with GPTQ-for-LLaMa.
# How to use this GPTQ model from Python code
First, make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
```
pip install auto-gptq
```
Second, install `tiktoken`, which is required by the tokenizer:
```
pip install tiktoken
```
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
model_name_or_path = "SearchUnify-ML/xgen-7b-8k-open-instruct-gptq"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
use_fast=False,
trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=False,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton)
# Note: check the prompt template is correct for this model.
prompt = "Explain the rules of field hockey to a novice."
prompt_template = f'''### Instruction: {prompt}
### Response:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.3, max_new_tokens=512)
print(f"\n\n {tokenizer.decode(output[0]).split('### Response:')[1]}")
```
|
zohaib99k/Llama-2-13B-chat-8bit-GPTQ
|
zohaib99k
| 2023-08-03T05:41:06Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-03T04:44:11Z |
---
inference: false
language:
- en
license: other
model_type: llama
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's Llama 2 13B-chat GPTQ
These files are GPTQ model files for [Meta's Llama 2 13B-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13B-chat-hf)
## Prompt template: Llama-2-Chat
```
SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-13B-chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-13B-chat-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Llama-2-13B-chat-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''SYSTEM: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's Llama 2 13B-chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Ravnoor1/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
|
Ravnoor1
| 2023-08-03T05:33:15Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T05:33:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
EllaHong/news_exp3
|
EllaHong
| 2023-08-03T05:31:41Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T05:31:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7
|
bharadwajkg
| 2023-08-03T05:30:53Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T16:59:15Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the bharadwajkg/planogram-sd-data7 dataset. You can find some example images below.




|
model-man/speecht5_finetuned_voxpopuli_hr
|
model-man
| 2023-08-03T05:09:01Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-28T07:27:58Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_hr
results: []
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_hr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4477
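A minimal synthesis sketch (assuming the standard SpeechT5 inference recipe; the zero speaker embedding is a placeholder for a real 512-dim x-vector, and the Croatian example text is ours):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "model-man/speecht5_finetuned_voxpopuli_hr"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dobar dan!", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder; use a real x-vector in practice
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```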
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5085 | 5.93 | 500 | 0.4674 |
| 0.4864 | 11.87 | 1000 | 0.4538 |
| 0.4829 | 17.8 | 1500 | 0.4494 |
| 0.4785 | 23.74 | 2000 | 0.4477 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
|
DunnBC22/mit-b0-Image_segmentation-Carvana_Image_Masking
|
DunnBC22
| 2023-08-03T04:50:05Z | 0 | 1 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"Image_Masking",
"image-segmentation",
"en",
"license:other",
"region:us"
] |
image-segmentation
| 2023-08-01T03:37:29Z |
---
license: other
tags:
- generated_from_trainer
- Image_Masking
model-index:
- name: mit-b0-Image_segmentation-Carvana_Image_Masking
results: []
language:
- en
metrics:
- mean_iou
pipeline_tag: image-segmentation
---
# mit-b0-Image_segmentation-Carvana_Image_Masking
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0).
It achieves the following results on the evaluation set:
- Loss: 0.0070
- Mean Iou: 0.9917
- Mean Accuracy: 0.9962
- Overall Accuracy: 0.9972
- Per Category Iou:
  - Segment 0: 0.9964996655500316
  - Segment 1: 0.9868763925617403
- Per Category Accuracy:
  - Segment 0: 0.9980006976075766
  - Segment 1: 0.994318466698934
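Since `mit-b0` is a SegFormer backbone, a minimal inference sketch looks like the following (assuming the checkpoint loads as a plain `SegformerForSemanticSegmentation`; the input image is hypothetical):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "DunnBC22/mit-b0-Image_segmentation-Carvana_Image_Masking"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("car.jpg").convert("RGB")  # hypothetical input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]       # per-pixel segment ids (0 = background, 1 = car)
```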
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Segmentation/Carvana%20Image%20Masking/Carvana%20Image%20Masking%20-%20Image%20Segmentation%20with%20LoRA.ipynb
## Intended uses & limitations
I used this project to improve my skill set. I thank all of the authors of the different technologies and datasets for the contributions that made this possible.
Please make sure to properly cite the authors of the different technologies and dataset(s), as they absolutely deserve credit for their contributions.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ipythonx/carvana-image-masking-png
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Segment 0 Per Category Iou | Segment 1 Per Category Iou | Segment 0 Per Category Accuracy | Segment 1 Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:--------------------:|:-----------------:|:--------------------:|
| 0.0137 | 1.0 | 509 | 0.0113 | 0.9873 | 0.9942 | 0.9957 | 0.9946 | 0.9799 | 0.9969 | 0.9915 |
| 0.011 | 2.0 | 1018 | 0.0096 | 0.9889 | 0.9948 | 0.9963 | 0.9953 | 0.9826 | 0.9974 | 0.9922 |
| 0.0096 | 3.0 | 1527 | 0.0087 | 0.9899 | 0.9950 | 0.9966 | 0.9958 | 0.9841 | 0.9978 | 0.9922 |
| 0.0089 | 4.0 | 2036 | 0.0082 | 0.9904 | 0.9958 | 0.9968 | 0.9959 | 0.9848 | 0.9975 | 0.9941 |
| 0.0086 | 5.0 | 2545 | 0.0078 | 0.9907 | 0.9962 | 0.9969 | 0.9961 | 0.9853 | 0.9974 | 0.9951 |
| 0.0082 | 6.0 | 3054 | 0.0077 | 0.9908 | 0.9964 | 0.9969 | 0.9961 | 0.9855 | 0.9973 | 0.9956 |
| 0.0081 | 7.0 | 3563 | 0.0072 | 0.9914 | 0.9961 | 0.9971 | 0.9964 | 0.9864 | 0.9979 | 0.9944 |
| 0.0081 | 8.0 | 4072 | 0.0071 | 0.9915 | 0.9961 | 0.9972 | 0.9964 | 0.9866 | 0.9980 | 0.9942 |
| 0.0089 | 9.0 | 4581 | 0.0070 | 0.9916 | 0.9961 | 0.9972 | 0.9965 | 0.9868 | 0.9980 | 0.9941 |
| 0.0076 | 10.0 | 5090 | 0.0070 | 0.9917 | 0.9962 | 0.9972 | 0.9965 | 0.9869 | 0.9980 | 0.9943 |
* All values in the chart above are rounded to the nearest ten-thousandth.
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chiuliwen/adapter_model
|
chiuliwen
| 2023-08-03T04:44:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T04:40:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
DavidLazer/llama2_finetuned_david
|
DavidLazer
| 2023-08-03T04:37:02Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-08-03T04:28:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2_finetuned_david
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_finetuned_david
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
JMINLEE/ppo-LunarLander-v2
|
JMINLEE
| 2023-08-03T04:35:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T04:35:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.33 +/- 17.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("JMINLEE/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nguyenhuuanh/Test
|
nguyenhuuanh
| 2023-08-03T04:34:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-03T04:34:07Z |
---
license: creativeml-openrail-m
---
|
DUOMO-Lab/TransGPT-v0
|
DUOMO-Lab
| 2023-08-03T04:21:07Z | 67 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"chatglm",
"zh",
"Text2Text-Generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T01:27:19Z |
---
title: TransGPT-7b
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
license: "other"
widget:
- text: "我想了解如何申请和更新驾驶证?"
---
# TransGPT
**Releasing the Chinese TransGPT (7B) model**
test case:
|input_text|predict|
|:-- |:--- |
|我想了解如何申请和更新驾驶证?(I'd like to know how to apply for and renew a driver's license.)|你可以到当地的交通管理部门或者公安局办理相关手续。具体流程可以在官方网站上查询。(You can complete the formalities at your local traffic administration department or public security bureau; the exact procedure can be found on the official website.)|
# File Checksums
```
md5sum ./*
```
```
e618653f90f163928316858e95bd54d1 ./config.json
b1eb3650cbc84466fed263a9f0dff5e2 ./generation_config.json
570159d90b39554713e9702b9107928a ./pytorch_model-00001-of-00002.bin
8788671a726d25b192134909fb825e0b ./pytorch_model-00002-of-00002.bin
604e0ba32b2cb7df8d8a3d13bddc93fe ./pytorch_model.bin.index.json
413c7f9a8a6517c52c937eed27f18847 ./special_tokens_map.json
2ba2be903e87d7471bbc413e041e70e8 ./tokenizer_config.json
39afcc4541e7931ef0d561ac6e216586 ./tokenizer.model
```
## Usage
First, pass your input through the transformer model; it returns the generated sentence.
Install package:
```
pip install sentencepiece
pip install transformers>=4.28.0
```
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
checkpoint="DUOMO-Lab/TransGPT-v0"
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = LlamaForCausalLM.from_pretrained(checkpoint).half().cuda()
model.eval()
text = '我想了解如何申请和更新驾驶证?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
output_ids = model.generate(
input_ids=input_ids,
max_new_tokens=1024,
temperature=1,
top_k=20,
top_p=0.9,
repetition_penalty=1.15
).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```
output:
```shell
我想了解如何申请和更新驾驶证?
```
## Model Provenance
These are the merged model weights from the release.
The Hugging Face weights (.bin files) can be used to:
- train and run inference with Transformers
- build a UI with text-generation-webui
The PyTorch weights (.pth files) can be used to:
- quantize and deploy with the llama.cpp toolchain
Model file layout:
```
TransGPT
config.json
generation_config.json
pytorch_model-00001-of-00002.bin
pytorch_model-00002-of-00002.bin
pytorch_model.bin.index.json
special_tokens_map.json
tokenizer.json
tokenizer.model
tokenizer_config.json
```
Hardware requirements: 14 GB of GPU memory
### Fine-tuning Datasets
1. ~346k text samples (for in-domain pretraining): [DUOMO-Lab/TransGPT-pt](https://huggingface.co/datasets/DUOMO-Lab/TransGPT-pt)
2. ~56k dialogue samples (for fine-tuning): [finetune_data](https://huggingface.co/data/finetune)
To train the LLaMA model yourself, see [https://github.com/DUOMO/TransGPT](https://github.com/DUOMO/TransGPT)
## Citation
```latex
@software{TransGPT,
author = {Wang Peng},
title = {DUOMO/TransGPT},
year = {2023},
url = {https://github.com/DUOMO/TransGPT},
}
```
## Reference
- https://github.com/shibing624/textgen
|
Fazmin/MPT-7B-MacU-01X-Experimental
|
Fazmin
| 2023-08-03T04:06:03Z | 0 | 0 | null |
[
"text-generation",
"en",
"dataset:Anthropic/hh-rlhf",
"dataset:ehartford/dolphin",
"dataset:conceptofmind/t0_submix_original",
"dataset:conceptofmind/niv2_submix_original",
"region:us"
] |
text-generation
| 2023-07-12T02:46:24Z |
---
datasets:
- Anthropic/hh-rlhf
- ehartford/dolphin
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
language:
- en
pipeline_tag: text-generation
---
# Mac Llama 13B
## Model Description
`Mac Llama 13B` is an experimental Llama 2 13B model fine-tuned on an Orca-style dataset.
## Usage
Mac Llama 13B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant:
The output of Mac Llama 13B
```
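A small helper for assembling that format (a sketch; the function name is ours, not part of this repo):
```python
def build_prompt(system: str, user: str) -> str:
    """Assemble the Mac Llama 13B prompt format shown above."""
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

prompt = build_prompt(
    "This is a system prompt, please behave and help the user.",
    "Your prompt here",
)
```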
## Model Details
* **Model type**: Mac Llama 13B is an auto-regressive language model fine-tuned on Llama2 13B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
### Training Procedure
Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW. We outline the following hyperparameters:
| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
## Ethical Considerations and Limitations
This is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, this model's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of this model, developers should perform safety testing and tuning tailored to their specific applications.
|
saurabh2086/Taxi-v3
|
saurabh2086
| 2023-08-03T04:03:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T04:03:17Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
# (it downloads and unpickles the saved Q-table).
model = load_from_hub(repo_id="saurabh2086/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Holmodi/a2c-AntBulletEnv-v0
|
Holmodi
| 2023-08-03T04:01:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T04:00:37Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1108.89 +/- 245.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained policy from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("Holmodi/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
wilson-wei/speecht5_tts-finetuned-voxpopuli
|
wilson-wei
| 2023-08-03T03:47:31Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-02T09:04:32Z |
---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_tts-finetuned-voxpopuli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts-finetuned-voxpopuli
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5204 | 4.3 | 1000 | 0.4779 |
| 0.4952 | 8.61 | 2000 | 0.4651 |
| 0.4942 | 12.91 | 3000 | 0.4614 |
| 0.4928 | 17.21 | 4000 | 0.4575 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
thanhnew2001/vn-bloom-7b1
|
thanhnew2001
| 2023-08-03T03:43:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T03:42:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
SarwarShafee/BanglaBert_with_TFModel
|
SarwarShafee
| 2023-08-03T03:38:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T10:41:23Z |
# Test Code
```python
import tensorflow as tf
from transformers import TFAutoModelForPreTraining, AutoTokenizer
from normalizer import normalize
import numpy as np
model = TFAutoModelForPreTraining.from_pretrained("SarwarShafee/BanglaBert_with_TFModel", from_pt=True)
tokenizer = AutoTokenizer.from_pretrained("SarwarShafee/BanglaBert_with_TFModel")
original_sentence = "আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = "আমি হতাশ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = normalize(fake_sentence) # this normalization step is required before tokenizing the text
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="tf")
discriminator_outputs = model(fake_inputs)[0]
predictions = tf.round((tf.sign(discriminator_outputs) + 1) / 2)
# Convert the predictions to a Python list and then to integers
predictions_list = predictions.numpy().squeeze().tolist()
integer_predictions = [int(prediction) for prediction in predictions_list[1:-1]]  # each entry is a 0./1. float; skip [CLS] and [SEP]
print(" ".join(fake_tokens))
print("-" * 50)
print(" ".join([str(prediction) for prediction in integer_predictions]))
print("-" * 50)
```
|
NasimB/bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
|
NasimB
| 2023-08-03T03:35:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T01:29:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1533
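A minimal generation sketch (assuming the checkpoint loads as a standard GPT-2 causal LM; the prompt is ours):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```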
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3573 | 0.29 | 500 | 5.3408 |
| 5.0516 | 0.59 | 1000 | 4.9319 |
| 4.7229 | 0.88 | 1500 | 4.7012 |
| 4.4572 | 1.17 | 2000 | 4.5696 |
| 4.3138 | 1.46 | 2500 | 4.4428 |
| 4.2149 | 1.76 | 3000 | 4.3505 |
| 4.0893 | 2.05 | 3500 | 4.2888 |
| 3.8993 | 2.34 | 4000 | 4.2377 |
| 3.8964 | 2.63 | 4500 | 4.1837 |
| 3.8463 | 2.93 | 5000 | 4.1298 |
| 3.6523 | 3.22 | 5500 | 4.1291 |
| 3.6059 | 3.51 | 6000 | 4.1041 |
| 3.584 | 3.8 | 6500 | 4.0725 |
| 3.4873 | 4.1 | 7000 | 4.0728 |
| 3.334 | 4.39 | 7500 | 4.0694 |
| 3.3297 | 4.68 | 8000 | 4.0589 |
| 3.3185 | 4.97 | 8500 | 4.0454 |
| 3.1661 | 5.27 | 9000 | 4.0614 |
| 3.1518 | 5.56 | 9500 | 4.0607 |
| 3.1478 | 5.85 | 10000 | 4.0586 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
neuralsentry/vulnfixClassification-DistilBERT-DCMB
|
neuralsentry
| 2023-08-03T03:31:48Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:neuralsentry/distilbert-git-commits-mlm",
"base_model:finetune:neuralsentry/distilbert-git-commits-mlm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-03T03:19:17Z |
---
license: apache-2.0
base_model: neuralsentry/distilbert-git-commits-mlm
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vulnfixClassification-DistilBERT-DCMB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vulnfixClassification-DistilBERT-DCMB
This model is a fine-tuned version of [neuralsentry/distilbert-git-commits-mlm](https://huggingface.co/neuralsentry/distilbert-git-commits-mlm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1769
- Accuracy: 0.9713
- Precision: 0.9778
- Recall: 0.9667
- F1: 0.9722
- Roc Auc: 0.9715
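For reference, a hedged sketch of computing these metrics with scikit-learn; the label arrays below are illustrative placeholders, not the card's evaluation data:
```python
# Hedged sketch: the evaluation metrics above, computed with scikit-learn on toy labels.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 1]  # placeholder ground truth
y_pred = [1, 0, 1, 0, 0, 1]  # placeholder predictions
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("roc_auc  :", roc_auc_score(y_true, y_pred))
```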
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.2594 | 1.0 | 110 | 0.1452 | 0.9520 | 0.9672 | 0.9395 | 0.9532 | 0.9525 |
| 0.0966 | 2.0 | 220 | 0.1103 | 0.9644 | 0.9714 | 0.9599 | 0.9656 | 0.9646 |
| 0.0499 | 3.0 | 330 | 0.1193 | 0.9640 | 0.9679 | 0.9626 | 0.9653 | 0.9641 |
| 0.0251 | 4.0 | 440 | 0.1289 | 0.9623 | 0.9577 | 0.9703 | 0.9640 | 0.9619 |
| 0.0132 | 5.0 | 550 | 0.1495 | 0.9660 | 0.9660 | 0.9687 | 0.9673 | 0.9659 |
| 0.0086 | 6.0 | 660 | 0.1759 | 0.9684 | 0.9830 | 0.9558 | 0.9692 | 0.9689 |
| 0.0054 | 7.0 | 770 | 0.1568 | 0.9700 | 0.9788 | 0.9632 | 0.9709 | 0.9703 |
| 0.0023 | 8.0 | 880 | 0.1775 | 0.9707 | 0.9754 | 0.9681 | 0.9717 | 0.9708 |
| 0.0023 | 9.0 | 990 | 0.1752 | 0.9710 | 0.9794 | 0.9646 | 0.9719 | 0.9713 |
| 0.0011 | 10.0 | 1100 | 0.1769 | 0.9713 | 0.9778 | 0.9667 | 0.9722 | 0.9715 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
chriskim2273/IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_3
|
chriskim2273
| 2023-08-03T03:19:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-03T03:08:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0682
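Since the card ships no usage snippet, here is a hedged sketch of extractive-QA inference with the `transformers` pipeline; the question and context strings are illustrative assumptions:
```python
# Hedged sketch: querying the fine-tuned extractive QA model; inputs are made up.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="chriskim2273/IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_3",
)
result = qa(
    question="What is the company name?",
    context="Acme Robotics, based in Boston, raised a Series B round in 2023.",  # placeholder
)
print(result["answer"], result["score"])
```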
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
EventHorizonAI/llama2-qlora-finetunined-french
|
EventHorizonAI
| 2023-08-03T03:19:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T03:19:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
obong/xlm-roberta-base-finetuned-panx-en
|
obong
| 2023-08-03T03:10:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T02:51:09Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.683008356545961
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4019
- F1: 0.6830
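As a usage illustration (not part of the original card), NER inference could look like this hedged sketch; the input sentence is a placeholder:
```python
# Hedged sketch: token classification with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="obong/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)
print(ner("Jeff Dean works at Google in California."))  # placeholder sentence
```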
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1318 | 1.0 | 50 | 0.6082 | 0.5386 |
| 0.5111 | 2.0 | 100 | 0.4409 | 0.6474 |
| 0.3597 | 3.0 | 150 | 0.4019 | 0.6830 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
wuru330/378A1_results_coord
|
wuru330
| 2023-08-03T03:01:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-03T02:15:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 378A1_results_coord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 378A1_results_coord
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4924
- Accuracy: 0.8946
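A hedged usage sketch (not from the original card); the image path is a placeholder:
```python
# Hedged sketch: image classification with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="wuru330/378A1_results_coord")
print(classifier("example.jpg"))  # placeholder image path
```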
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2529 | 1.0 | 37 | 1.0701 | 0.6207 |
| 0.6771 | 2.0 | 74 | 0.6678 | 0.7687 |
| 0.4363 | 3.0 | 111 | 0.5622 | 0.8010 |
| 0.2884 | 4.0 | 148 | 0.3808 | 0.8690 |
| 0.2382 | 5.0 | 185 | 0.3492 | 0.8810 |
| 0.1213 | 6.0 | 222 | 0.3485 | 0.8895 |
| 0.1238 | 7.0 | 259 | 0.4012 | 0.8827 |
| 0.0878 | 8.0 | 296 | 0.4311 | 0.8639 |
| 0.0839 | 9.0 | 333 | 0.4417 | 0.8656 |
| 0.0406 | 10.0 | 370 | 0.3993 | 0.8844 |
| 0.0509 | 11.0 | 407 | 0.4922 | 0.8690 |
| 0.0347 | 12.0 | 444 | 0.4840 | 0.8741 |
| 0.033 | 13.0 | 481 | 0.4572 | 0.8827 |
| 0.0222 | 14.0 | 518 | 0.4376 | 0.8861 |
| 0.0197 | 15.0 | 555 | 0.4397 | 0.8912 |
| 0.0179 | 16.0 | 592 | 0.4464 | 0.8946 |
| 0.0167 | 17.0 | 629 | 0.4526 | 0.8946 |
| 0.0154 | 18.0 | 666 | 0.4588 | 0.8929 |
| 0.0148 | 19.0 | 703 | 0.4642 | 0.8929 |
| 0.0135 | 20.0 | 740 | 0.4691 | 0.8929 |
| 0.0131 | 21.0 | 777 | 0.4732 | 0.8946 |
| 0.0125 | 22.0 | 814 | 0.4776 | 0.8946 |
| 0.0119 | 23.0 | 851 | 0.4809 | 0.8946 |
| 0.0116 | 24.0 | 888 | 0.4841 | 0.8946 |
| 0.0112 | 25.0 | 925 | 0.4863 | 0.8946 |
| 0.0111 | 26.0 | 962 | 0.4885 | 0.8946 |
| 0.0108 | 27.0 | 999 | 0.4903 | 0.8946 |
| 0.0108 | 28.0 | 1036 | 0.4912 | 0.8946 |
| 0.0105 | 29.0 | 1073 | 0.4921 | 0.8946 |
| 0.0108 | 30.0 | 1110 | 0.4924 | 0.8946 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Austism/chronos-hermes-13b-v2-GPTQ
|
Austism
| 2023-08-03T03:00:23Z | 13 | 15 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"llama-2",
"pytorch",
"chatbot",
"storywriting",
"generalist-model",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T00:31:31Z |
---
license: other
tags:
- llama
- llama-2
- pytorch
- chatbot
- storywriting
- generalist-model
---
# chronos-hermes-13b-v2-GPTQ
([chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) + [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)) 75/25 merge
4bit (int4) 128g quantization
- [FP16 HF Weights](https://huggingface.co/Austism/chronos-hermes-13b-v2)
## Prompt Format
```
### Instruction:
<prompt>
### Response:
```
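A minimal sketch of filling the template above in Python; the instruction text is an illustrative assumption:
```python
# Hedged sketch: building a prompt in the format shown above.
instruction = "Write a short story about a lighthouse keeper."  # placeholder
prompt = f"### Instruction:\n{instruction}\n### Response:\n"
print(prompt)
```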
This is an adaptation of [chronos-hermes-13b](https://huggingface.co/Austism/chronos-hermes-13b) for llama-2.
|
obong/xlm-roberta-base-finetuned-panx-it
|
obong
| 2023-08-03T02:51:01Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T02:31:28Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8118081180811809
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2476
- F1: 0.8118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8354 | 1.0 | 70 | 0.3222 | 0.7418 |
| 0.3102 | 2.0 | 140 | 0.2883 | 0.7619 |
| 0.2079 | 3.0 | 210 | 0.2476 | 0.8118 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
RUCAIBox/Erya
|
RUCAIBox
| 2023-08-03T02:41:13Z | 124 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"translation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-21T06:09:18Z |
---
license: apache-2.0
pipeline_tag: translation
language:
- zh
---
# Model Description
Erya is a pretrained model specifically designed for translating Ancient Chinese into Modern Chinese. It uses an encoder-decoder architecture and has been trained with a combination of DMLM (Dual Masked Language Model) and DAS (Disyllabic Aligned Substitution) techniques on datasets comprising both Ancient Chinese and Modern Chinese texts. Detailed information about our work can be found here: [RUCAIBox/Erya (github.com)](https://github.com/RUCAIBox/Erya)
More information about the Erya dataset, which can be used to further tune the Erya model for better translation performance, can be found here: [RUCAIBox/Erya-dataset · Datasets at Hugging Face](https://huggingface.co/datasets/RUCAIBox/Erya-dataset).
# Example
```python
>>> from transformers import BertTokenizer, CPTForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("RUCAIBox/Erya")
>>> model = CPTForConditionalGeneration.from_pretrained("RUCAIBox/Erya")
>>> input_ids = tokenizer("安世字子孺,少以父任为郎。", return_tensors='pt')
>>> input_ids.pop("token_type_ids")
>>> pred_ids = model.generate(max_new_tokens=256, **input_ids)
>>> print(tokenizer.batch_decode(pred_ids, skip_special_tokens=True))
['安 世 字 子 孺 , 年 轻 时 因 父 任 郎 官 。']
```
|
RUCAIBox/Erya4FT
|
RUCAIBox
| 2023-08-03T02:39:19Z | 125 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"translation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-29T12:15:15Z |
---
license: apache-2.0
language:
- zh
metrics:
- bleu
pipeline_tag: translation
widget:
- text: "竖子不足与谋。"
example_title: "translation"
---
# Model Description
Erya4FT is based on [Erya](https://huggingface.co/RUCAIBox/Erya) and further fine-tuned on our [Dataset](https://huggingface.co/datasets/RUCAIBox/Erya-dataset), enhancing its ability to translate Ancient Chinese into Modern Chinese.
# Example
```python
>>> from transformers import BertTokenizer, CPTForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("RUCAIBox/Erya4FT")
>>> model = CPTForConditionalGeneration.from_pretrained("RUCAIBox/Erya4FT")
>>> input_ids = tokenizer("竖子不足与谋。", return_tensors='pt')
>>> input_ids.pop("token_type_ids")
>>> pred_ids = model.generate(max_new_tokens=256, **input_ids)
>>> print(tokenizer.batch_decode(pred_ids, skip_special_tokens=True))
['这 小 子 不 值 得 与 他 商 量 。']
```
|
chriskim2273/IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_2
|
chriskim2273
| 2023-08-03T02:37:17Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-03T02:26:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.7_DistilBert_DIFFERENT_UNK_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
cbalaji/Llama2-13b-chat-hf-sconnectpostgen
|
cbalaji
| 2023-08-03T02:22:14Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T02:22:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
obong/xlm-roberta-base-finetuned-panx-de-fr
|
obong
| 2023-08-03T02:09:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T01:45:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2907 | 1.0 | 715 | 0.1899 | 0.8204 |
| 0.1477 | 2.0 | 1430 | 0.1578 | 0.8509 |
| 0.0934 | 3.0 | 2145 | 0.1608 | 0.8609 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
CobraMamba/mamba-gpt-3b-v3
|
CobraMamba
| 2023-08-03T01:55:05Z | 1,403 | 18 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-28T07:45:24Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
---
# Model Card
**The Best 3B Model! Surpassing dolly-v2-12b**
The best 3B model on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), with performance surpassing dolly-v2-12b.
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 27.3 |
| ARC (25-shot) | 41.7 |
| HellaSwag (10-shot) | 71.1 |
| TruthfulQA (0-shot) | 37.9 |
| Avg. | 44.5 |
We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.
The training code and data will be open-sourced later on GitHub (https://github.com/chi2liu/mamba-gpt-3b).
## Training Dataset
`mamba-gpt-3b-v3` is trained on multiple datasets:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
## Summary
We have fine-tuned the open-llama model and surpassed the original model in multiple evaluation subtasks, making it currently the best-performing 3B model, with performance comparable to llama-7b.
- Base model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CobraMamba/mamba-gpt-3b-v3")
model = AutoModelForCausalLM.from_pretrained("CobraMamba/mamba-gpt-3b-v3", trust_remote_code=True, torch_dtype=torch.float16)
input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
output = model.generate(input_ids, max_length=128, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@Misc{mamba-gpt-3b-v3,
title = {Mamba-GPT-3b-v3},
author = {chiliu},
howpublished = {\url{https://huggingface.co/CobraMamba/mamba-gpt-3b-v3}},
year = {2023}
}
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
JabrilJacobs/Reinforce-CartPole-v1
|
JabrilJacobs
| 2023-08-03T01:50:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:49:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tomoohive/ppo-SnowballTarget
|
tomoohive
| 2023-08-03T01:44:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:44:00Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent using ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tomoohive/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dariowsz/a2c-AntBulletEnv-v0
|
dariowsz
| 2023-08-03T01:37:50Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:36:34Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1065.75 +/- 137.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
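One possible loading path, sketched under the assumption that the checkpoint follows the usual `huggingface_sb3` naming; the filename is a guess, not confirmed by this card:
```python
# Hedged sketch: loading the A2C checkpoint from the Hub; the filename is assumed.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="dariowsz/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumption: default naming convention
)
model = A2C.load(checkpoint)
```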
|
arpan-das-astrophysics/Pixelcopter-PLE-v0
|
arpan-das-astrophysics
| 2023-08-03T01:22:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:22:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.60 +/- 32.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|