| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 00:41:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 00:40:24) | card (string, length 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
PhongLe1311/my_awesome_billsum_model
|
PhongLe1311
| 2023-06-25T15:30:09Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:20:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1408
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5181
- Rouge1: 0.1408
- Rouge2: 0.0514
- Rougel: 0.1173
- Rougelsum: 0.1173
- Gen Len: 19.0
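A minimal usage sketch, assuming the standard `transformers` summarization pipeline; the input text is hypothetical:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
summarizer = pipeline("summarization", model="PhongLe1311/my_awesome_billsum_model")

# Hypothetical bill text; the billsum fine-tuning recipe this card follows typically uses a "summarize: " task prefix
text = "summarize: The bill requires the state to allocate additional funding to local water districts ..."
print(summarizer(text, max_length=50, min_length=10, do_sample=False)[0]["summary_text"])
```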
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8150 | 0.1264 | 0.0373 | 0.1061 | 0.1061 | 19.0 |
| No log | 2.0 | 124 | 2.5989 | 0.1379 | 0.0501 | 0.1164 | 0.1165 | 19.0 |
| No log | 3.0 | 186 | 2.5349 | 0.1396 | 0.0525 | 0.1179 | 0.1181 | 19.0 |
| No log | 4.0 | 248 | 2.5181 | 0.1408 | 0.0514 | 0.1173 | 0.1173 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahessamb/bertopic-test
|
ahessamb
| 2023-06-25T15:29:15Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-25T15:29:09Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-test
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-test")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 50
* Number of training documents: 1570
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | liquidations - forcefully - betting - liquidation - contracts | 8 | 0_liquidations_forcefully_betting_liquidation |
| 1 | litecoin - wsm - presale - 77 - near | 94 | 1_litecoin_wsm_presale_77 |
| 2 | sec - court - terraform - dismiss - lawyers | 49 | 2_sec_court_terraform_dismiss |
| 3 | huobi - hkvac - bsl - web3 - code | 12 | 3_huobi_hkvac_bsl_web3 |
| 4 | lucie - shiba - susbarium - puppynet - portals | 3 | 4_lucie_shiba_susbarium_puppynet |
| 5 | 000006819 - shiba - accuracy - finbold - estimates | 27 | 5_000006819_shiba_accuracy_finbold |
| 6 | tokens - sec - binance - securities - coinbase | 45 | 6_tokens_sec_binance_securities |
| 7 | mckinsey - ai - nanjing - productivity - diffusion | 43 | 7_mckinsey_ai_nanjing_productivity |
| 8 | resistance - swing - fib - zone - ltc | 32 | 8_resistance_swing_fib_zone |
| 9 | brinkman - tategpt - bitcoin - artists - wealth | 26 | 9_brinkman_tategpt_bitcoin_artists |
| 10 | stablecoin - stablecoins - decline - redemptions - tusd | 2 | 10_stablecoin_stablecoins_decline_redemptions |
| 11 | mutant - mayc - bayc - club - mcmullen | 64 | 11_mutant_mayc_bayc_club |
| 12 | xrp - ema - ripple - bullish - cryptocurrencies | 43 | 12_xrp_ema_ripple_bullish |
| 13 | tether - cbdcs - loans - federal - nafcu | 27 | 13_tether_cbdcs_loans_federal |
| 14 | rate - tradingview - bnb - breakout - coinmarketcap | 85 | 14_rate_tradingview_bnb_breakout |
| 15 | 26 - bulls - rsi - ceiling - 300 | 2 | 15_26_bulls_rsi_ceiling |
| 16 | lowest - jump - week - wallet - staggering | 3 | 16_lowest_jump_week_wallet |
| 17 | xrp - ripple - mekras - sbi - institutions | 56 | 17_xrp_ripple_mekras_sbi |
| 18 | debt - mortgages - trillion - government - suspends | 3 | 18_debt_mortgages_trillion_government |
| 19 | longitude - chronometer - bitcoin - ships - graffiti | 2 | 19_longitude_chronometer_bitcoin_ships |
| 20 | volumes - piggy - aud - xrp - usdt | 15 | 20_volumes_piggy_aud_xrp |
| 21 | root - ledger - stakers - sidechains - compatibility | 4 | 21_root_ledger_stakers_sidechains |
| 22 | astra - letter - concerns - investors - bitwise | 4 | 22_astra_letter_concerns_investors |
| 23 | gold - governments - manipulated - stocks - mined | 10 | 23_gold_governments_manipulated_stocks |
| 24 | tether - sygnum - documents - bank - coindesk | 9 | 24_tether_sygnum_documents_bank |
| 25 | rewards - governance - lido - proposal - june | 45 | 25_rewards_governance_lido_proposal |
| 26 | listings - coin - fairerc20 - bittrex - withdrawals | 68 | 26_listings_coin_fairerc20_bittrex |
| 27 | peaq - ordibots - cosmos - fetch - machine | 81 | 27_peaq_ordibots_cosmos_fetch |
| 28 | uniswap - v4 - orders - hooks - differing | 23 | 28_uniswap_v4_orders_hooks |
| 29 | price - neo - matic - rise - altcoin | 92 | 29_price_neo_matic_rise |
| 30 | emptydoc - staff - policy - binance - workspaces | 2 | 30_emptydoc_staff_policy_binance |
| 31 | lunc - synthetix - terra - perps - staking | 33 | 31_lunc_synthetix_terra_perps |
| 32 | tweet - dogecoin - chart - meme - negative | 3 | 32_tweet_dogecoin_chart_meme |
| 33 | binance - securities - exchange - cz - regulators | 63 | 33_binance_securities_exchange_cz |
| 34 | bitmart - sale - xrp - discount - event | 4 | 34_bitmart_sale_xrp_discount |
| 35 | yuan - event - olympics - canadians - organizers | 49 | 35_yuan_event_olympics_canadians |
| 36 | gusd - fidelity - bitcoin - proposal - blackrock | 52 | 36_gusd_fidelity_bitcoin_proposal |
| 37 | bills - mcglone - markets - stablecoins - liquidity | 56 | 37_bills_mcglone_markets_stablecoins |
| 38 | asset - gain - drop - trading - hours | 2 | 38_asset_gain_drop_trading |
| 39 | epstein - hamsterwheel - vulnerability - bounty - certick | 28 | 39_epstein_hamsterwheel_vulnerability_bounty |
| 40 | pyth - transparency - data - terra - oracle | 19 | 40_pyth_transparency_data_terra |
| 41 | shiba - inu - weighted - collapse - recovery | 2 | 41_shiba_inu_weighted_collapse |
| 42 | neo - opensea - carey - security - impersonators | 24 | 42_neo_opensea_carey_security |
| 43 | balancer - zkevm - liquidity - defi - 8020 | 3 | 43_balancer_zkevm_liquidity_defi |
| 44 | reed - battle - platform - argument - trading | 22 | 44_reed_battle_platform_argument |
| 45 | ada - cardano - whale - sell - investors | 4 | 45_ada_cardano_whale_sell |
| 46 | uk - coinbase - hong - crypto - regulatory | 65 | 46_uk_coinbase_hong_crypto |
| 47 | ethereum - tvl - defi - arbitrum - airdrop | 54 | 47_ethereum_tvl_defi_arbitrum |
| 48 | swyftx - shibarium - token - shibaswap - shiba | 54 | 48_swyftx_shibarium_token_shibaswap |
| 49 | bitcoin - mining - gain - miners - difficulty | 54 | 49_bitcoin_mining_gain_miners |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
SwampMan/ppo-Huggy
|
SwampMan
| 2023-06-25T15:20:32Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SwampMan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cagmfr/q-FrozenLake-v1-4x4-noSlippery
|
cagmfr
| 2023-06-25T15:20:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="cagmfr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
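The snippet above relies on the `load_from_hub` helper and `gym` import from the Hugging Face Deep RL course. As a quick sanity check, the Q-table can also be fetched and rolled out greedily, as in the hedged sketch below; the `env_id` and `qtable` keys of the pickled dict are assumptions based on that course.
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict (this mirrors what load_from_hub does in the course)
path = hf_hub_download(repo_id="cagmfr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Assumed keys: the course pickle stores the gym environment id and the Q-table
env = gym.make(model["env_id"], is_slippery=False)
qtable = model["qtable"]

state, _ = env.reset(seed=42)
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```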
|
NasimB/gpt2-2-dp-mod-aochild-cut
|
NasimB
| 2023-06-25T15:09:04Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T07:34:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-2-dp-mod-aochild-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-2-dp-mod-aochild-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4109
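A minimal generation sketch, assuming the standard `transformers` text-generation pipeline; the prompt is hypothetical:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-2-dp-mod-aochild-cut")

# Hypothetical prompt; the model is a GPT-2 LM fine-tuned on the generator dataset
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```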
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7147 | 0.27 | 500 | 5.6451 |
| 5.3609 | 0.54 | 1000 | 5.2108 |
| 5.0162 | 0.81 | 1500 | 4.9585 |
| 4.7627 | 1.08 | 2000 | 4.8126 |
| 4.5775 | 1.35 | 2500 | 4.7013 |
| 4.4856 | 1.62 | 3000 | 4.6034 |
| 4.4038 | 1.89 | 3500 | 4.5175 |
| 4.2252 | 2.16 | 4000 | 4.4775 |
| 4.1408 | 2.42 | 4500 | 4.4236 |
| 4.1136 | 2.69 | 5000 | 4.3721 |
| 4.0852 | 2.96 | 5500 | 4.3281 |
| 3.87 | 3.23 | 6000 | 4.3418 |
| 3.8651 | 3.5 | 6500 | 4.3062 |
| 3.8601 | 3.77 | 7000 | 4.2781 |
| 3.8091 | 4.04 | 7500 | 4.2785 |
| 3.5972 | 4.31 | 8000 | 4.2888 |
| 3.6301 | 4.58 | 8500 | 4.2678 |
| 3.6398 | 4.85 | 9000 | 4.2396 |
| 3.4906 | 5.12 | 9500 | 4.2803 |
| 3.3704 | 5.39 | 10000 | 4.2849 |
| 3.4008 | 5.66 | 10500 | 4.2718 |
| 3.4029 | 5.93 | 11000 | 4.2491 |
| 3.1804 | 6.2 | 11500 | 4.3116 |
| 3.1361 | 6.47 | 12000 | 4.3119 |
| 3.1532 | 6.73 | 12500 | 4.3067 |
| 3.1591 | 7.0 | 13000 | 4.3072 |
| 2.8974 | 7.27 | 13500 | 4.3563 |
| 2.9167 | 7.54 | 14000 | 4.3589 |
| 2.9248 | 7.81 | 14500 | 4.3580 |
| 2.8683 | 8.08 | 15000 | 4.3791 |
| 2.741 | 8.35 | 15500 | 4.3939 |
| 2.7503 | 8.62 | 16000 | 4.3968 |
| 2.7573 | 8.89 | 16500 | 4.3983 |
| 2.6961 | 9.16 | 17000 | 4.4075 |
| 2.6562 | 9.43 | 17500 | 4.4101 |
| 2.6653 | 9.7 | 18000 | 4.4107 |
| 2.667 | 9.97 | 18500 | 4.4109 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Smaraa/t5-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T15:07:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:37:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7562
- Rouge1: 10.4603
- Rouge2: 2.642
- Rougel: 9.6362
- Rougelsum: 9.6589
- Gen Len: 13.2838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 464 | 0.5489 | 29.7693 | 11.1997 | 25.6091 | 25.5979 | 14.7281 |
| 0.9314 | 2.0 | 928 | 0.5392 | 29.9099 | 10.9645 | 25.334 | 25.3259 | 14.7188 |
| 0.5594 | 3.0 | 1392 | 0.5342 | 30.3194 | 11.4204 | 25.8248 | 25.8255 | 14.7666 |
| 0.5333 | 4.0 | 1856 | 0.5376 | 30.8368 | 11.6152 | 26.3172 | 26.3583 | 14.1578 |
| 0.5192 | 5.0 | 2320 | 0.8890 | 7.5517 | 1.4313 | 7.0971 | 7.1064 | 9.9191 |
| 0.8897 | 6.0 | 2784 | 0.8252 | 6.9283 | 1.3484 | 6.5916 | 6.5877 | 10.9894 |
| 0.9385 | 7.0 | 3248 | 0.7971 | 8.2401 | 1.9957 | 7.7693 | 7.7675 | 10.7732 |
| 0.9089 | 8.0 | 3712 | 0.7725 | 9.7559 | 2.2249 | 9.0272 | 9.0098 | 10.7175 |
| 0.8824 | 9.0 | 4176 | 0.7552 | 12.006 | 2.8041 | 11.0115 | 10.992 | 10.7838 |
| 0.8658 | 10.0 | 4640 | 0.7490 | 13.311 | 3.4159 | 12.1933 | 12.1551 | 10.6499 |
| 0.864 | 11.0 | 5104 | 0.7448 | 13.9983 | 3.6176 | 12.7712 | 12.7347 | 10.752 |
| 0.868 | 12.0 | 5568 | 0.7495 | 12.318 | 3.2975 | 11.3451 | 11.3218 | 12.0252 |
| 0.8844 | 13.0 | 6032 | 0.7552 | 10.6154 | 2.7347 | 9.8228 | 9.8116 | 13.191 |
| 0.8844 | 14.0 | 6496 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8971 | 15.0 | 6960 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8981 | 16.0 | 7424 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8956 | 17.0 | 7888 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8984 | 18.0 | 8352 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8959 | 19.0 | 8816 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8977 | 20.0 | 9280 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Smaraa/gpt2-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:56:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T12:42:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 464 | 0.7729 |
| 1.0489 | 2.0 | 928 | 0.7546 |
| 0.754 | 3.0 | 1392 | 0.7497 |
| 0.7034 | 4.0 | 1856 | 0.7530 |
| 0.6619 | 5.0 | 2320 | 0.7560 |
| 0.6265 | 6.0 | 2784 | 0.7639 |
| 0.5921 | 7.0 | 3248 | 0.7747 |
| 0.5621 | 8.0 | 3712 | 0.7848 |
| 0.5359 | 9.0 | 4176 | 0.7969 |
| 0.5115 | 10.0 | 4640 | 0.8113 |
| 0.4879 | 11.0 | 5104 | 0.8256 |
| 0.4683 | 12.0 | 5568 | 0.8373 |
| 0.4491 | 13.0 | 6032 | 0.8519 |
| 0.4491 | 14.0 | 6496 | 0.8642 |
| 0.4324 | 15.0 | 6960 | 0.8741 |
| 0.4176 | 16.0 | 7424 | 0.8841 |
| 0.4054 | 17.0 | 7888 | 0.8924 |
| 0.3946 | 18.0 | 8352 | 0.8994 |
| 0.3868 | 19.0 | 8816 | 0.9043 |
| 0.3813 | 20.0 | 9280 | 0.9089 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sukantan/all-MiniLM-L6-v2-ftlegal-v1
|
sukantan
| 2023-06-25T14:44:18Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"dataset:sukantan/nyaya-st-training",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-25T14:44:14Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- sukantan/nyaya-st-training
---
# sukantan/all-MiniLM-L6-v2-ftlegal-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sukantan/all-MiniLM-L6-v2-ftlegal-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sukantan/all-MiniLM-L6-v2-ftlegal-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 391 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 391,
"weight_decay": 0.01
}
```
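Pieced together, those settings correspond to a `sentence-transformers` training script roughly like the sketch below; the base checkpoint (`sentence-transformers/all-MiniLM-L6-v2`, implied by the repo name) and the placeholder training pairs are assumptions, with the real pairs coming from `sukantan/nyaya-st-training`.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base model implied by the repository name (assumption)
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder training pairs; the real data comes from sukantan/nyaya-st-training
train_examples = [InputExample(texts=["query text", "matching passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

train_loss = losses.MegaBatchMarginLoss(model=model)

# Parameters copied from the fit() section above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=391,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```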
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
wza/llama-65b-qlora-fin-1epoch
|
wza
| 2023-06-25T14:09:07Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-23T01:52:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
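A hedged sketch of loading the adapter with the quantization settings above; the base model path (`huggyllama/llama-65b`) is an assumption inferred from the repository name and is not stated in the card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

base_id = "huggyllama/llama-65b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the QLoRA adapter from this repository
model = PeftModel.from_pretrained(base_model, "wza/llama-65b-qlora-fin-1epoch")
model.eval()
```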
### Framework versions
- PEFT 0.4.0.dev0
|
wza/llama-65b-qlora-fin-2epoch
|
wza
| 2023-06-25T14:04:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T12:56:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Smaraa/bart-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:04:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:33:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7599
- Rouge1: 29.7176
- Rouge2: 10.9512
- Rougel: 25.5101
- Rougelsum: 25.526
- Gen Len: 15.2029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 232 | 0.5813 | 30.604 | 12.4253 | 26.5172 | 26.4807 | 15.2241 |
| No log | 2.0 | 464 | 0.5739 | 31.9076 | 12.798 | 27.4728 | 27.4929 | 15.2241 |
| 1.0176 | 3.0 | 696 | 0.5700 | 31.3776 | 12.2852 | 27.1116 | 27.0878 | 15.6459 |
| 1.0176 | 4.0 | 928 | 0.5762 | 30.8731 | 12.3014 | 26.9196 | 26.8301 | 14.6353 |
| 0.4798 | 5.0 | 1160 | 0.5863 | 29.927 | 11.7166 | 25.9447 | 25.921 | 14.4297 |
| 0.4798 | 6.0 | 1392 | 0.6003 | 29.9528 | 11.2098 | 25.6908 | 25.7209 | 14.7414 |
| 0.3855 | 7.0 | 1624 | 0.6179 | 30.1161 | 11.2863 | 26.1433 | 26.1519 | 15.1698 |
| 0.3855 | 8.0 | 1856 | 0.6290 | 29.5566 | 11.1229 | 25.6003 | 25.5754 | 14.87 |
| 0.3092 | 9.0 | 2088 | 0.6538 | 29.7844 | 11.2434 | 25.8222 | 25.8067 | 14.9708 |
| 0.3092 | 10.0 | 2320 | 0.6698 | 28.9941 | 10.6603 | 25.0054 | 25.0198 | 15.0239 |
| 0.247 | 11.0 | 2552 | 0.6906 | 28.732 | 10.4525 | 24.8897 | 24.8953 | 14.9721 |
| 0.247 | 12.0 | 2784 | 0.7023 | 29.0609 | 10.4762 | 24.9678 | 24.9893 | 15.317 |
| 0.198 | 13.0 | 3016 | 0.7200 | 29.9516 | 11.2397 | 25.7347 | 25.7489 | 15.1485 |
| 0.198 | 14.0 | 3248 | 0.7263 | 29.1565 | 10.7363 | 25.2238 | 25.203 | 14.9761 |
| 0.198 | 15.0 | 3480 | 0.7376 | 30.0068 | 11.2078 | 26.0012 | 26.0235 | 14.9589 |
| 0.1602 | 16.0 | 3712 | 0.7489 | 29.8747 | 11.0555 | 25.7321 | 25.7543 | 15.2931 |
| 0.1602 | 17.0 | 3944 | 0.7487 | 29.6901 | 10.8692 | 25.5467 | 25.5808 | 15.2798 |
| 0.1342 | 18.0 | 4176 | 0.7553 | 29.5496 | 10.8611 | 25.2895 | 25.3218 | 15.3156 |
| 0.1342 | 19.0 | 4408 | 0.7590 | 29.7733 | 11.1577 | 25.671 | 25.6883 | 15.1313 |
| 0.1184 | 20.0 | 4640 | 0.7599 | 29.7176 | 10.9512 | 25.5101 | 25.526 | 15.2029 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
rdenadai/BR_BERTo
|
rdenadai
| 2023-06-25T14:02:18Z | 180 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"portuguese",
"brazil",
"pt_BR",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: pt
tags:
- portuguese
- brazil
- pt_BR
widget:
- text: gostei muito dessa <mask>
---
# BR_BERTo
Portuguese (Brazil) model for text inference.
## Params
Trained on a corpus of 6_993_330 sentences.
- Vocab size: 150_000
- RobertaForMaskedLM size : 512
- Num train epochs: 3
- Time to train: ~10 days (on GCP with an Nvidia T4)
I followed the great tutorial from the Hugging Face team:
[How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train)
More info here:
[BR_BERTo](https://github.com/rdenadai/BR-BERTo)
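A minimal fill-mask sketch, reusing the Portuguese example from the card's widget metadata ("gostei muito dessa <mask>", roughly "I really liked this <mask>"):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="rdenadai/BR_BERTo")

# Print the top three predicted tokens for the masked position
for pred in unmasker("gostei muito dessa <mask>")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```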
|
anas21/English1SpeechToTextModel
|
anas21
| 2023-06-25T13:48:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T13:34:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: English1SpeechToTextModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# English1SpeechToTextModel
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
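A minimal inference sketch, assuming the checkpoint ships a CTC tokenizer/processor and that `audio.wav` is a hypothetical 16 kHz recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anas21/English1SpeechToTextModel")

# Hypothetical local audio file sampled at 16 kHz
print(asr("audio.wav")["text"])
```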
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 8
- seed: 10
- gradient_accumulation_steps: 10
- total_train_batch_size: 100
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
findnitai/FaceGen
|
findnitai
| 2023-06-25T13:25:03Z | 138 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T03:47:05Z |
---
license: apache-2.0
pipeline_tag: text-to-image
---
A few examples of unique faces generated by the model, which was trained on the FFHQ dataset.

|
lucasbertola/q-FrozenLake-v1-8x8-noSlipper
|
lucasbertola
| 2023-06-25T13:23:29Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:18:21Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-FrozenLake-v1-8x8-noSlipper
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="lucasbertola/q-FrozenLake-v1-4x4-noSlipper", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
PaulineJamin/q-FrozenLake-v1-4x4-noSlippery
|
PaulineJamin
| 2023-06-25T13:16:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:25:16Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="PaulineJamin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
S3S3/q-Taxi-v3
|
S3S3
| 2023-06-25T13:05:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:05:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="S3S3/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
paramrah/shoesv2
|
paramrah
| 2023-06-25T13:00:03Z | 2 | 0 |
tf-keras
|
[
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T12:59:39Z |
---
pipeline_tag: image-classification
---
|
bogdancazan/bart-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:57:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T14:35:21Z |
training_args = TrainingArguments(
output_dir='bart-base-newsela-biendata-with-domain-adaptation',
num_train_epochs=20,
warmup_steps=250,
per_device_train_batch_size=BATCH_SIZE,
weight_decay=0.01,
learning_rate=2e-4,
fp16=True,
optim="adafactor",
)
| Step | Training Loss |
|-----:|--------------:|
| 500 | 5.677000 |
| 1000 | 2.361900 |
| 1500 | 1.826000 |
| 2000 | 1.672900 |
| 2500 | 1.597900 |
| 3000 | 1.555700 |
| 3500 | 1.520600 |
| 4000 | 1.496300 |
| 4500 | 1.476800 |
TrainOutput(global_step=4640, training_loss=2.1116079396214977, metrics={'train_runtime': 1059.6025, 'train_samples_per_second': 279.992, 'train_steps_per_second': 4.379, 'total_flos': 0.0, 'train_loss': 2.1116079396214977, 'epoch': 20.0})
|
S3S3/q-FrozenLake-v1-4x4-noSlippery
|
S3S3
| 2023-06-25T12:53:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:53:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="S3S3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
VilohitT/question_answering_majorproject_2nd
|
VilohitT
| 2023-06-25T12:47:34Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T12:47:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
AtomGradient/Adjust_ChatGLM_6B
|
AtomGradient
| 2023-06-25T12:45:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"license:other",
"region:us"
] |
feature-extraction
| 2023-06-25T12:04:00Z |
---
license: other
---
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
import os
import torch
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", config=config, trust_remote_code=True)
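# Load the P-Tuning prefix-encoder weights saved in this repo and strip the "transformer.prefix_encoder." key prefix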
prefix_state_dict = torch.load(os.path.join("./Adjust_ChatGLM_6B/", "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
if k.startswith("transformer.prefix_encoder."):
new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
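# Quantize to 4-bit, run the model in half precision on the GPU, and keep the prefix encoder in fp32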
model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()
response, history = model.chat(tokenizer, "生成衬衣的广告词", history=[])  # prompt: "Write an advertising slogan for a shirt"
print(response)
```
|
TheBloke/vicuna-13b-v1.3.0-GGML
|
TheBloke
| 2023-06-25T12:41:16Z | 0 | 16 | null |
[
"arxiv:2302.13971",
"arxiv:2306.05685",
"license:other",
"region:us"
] | null | 2023-06-25T10:52:15Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# LmSys' Vicuna 13B v1.3 GGML
These files are GGML format model files for [LmSys' Vicuna 13B v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3).
**NOTE**: This model was recently updated by the LmSys Team. If you already downloaded Vicuna 13B v1.3 GPTQ or GGML, you may want to re-download it from this repo, as the weights were updated. The original model I uploaded has been renamed to v1.3-preview.
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3)
## Prompt template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-13b-v1.3.0.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-13b-v1.3.0.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m vicuna-13b-v1.3.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
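For the Python libraries listed above, a hedged llama-cpp-python sketch is shown below; it assumes a llama-cpp-python release from the GGML era (before the switch to GGUF) that can still load these `.bin` files.
```python
from llama_cpp import Llama

# n_gpu_layers behaves like -ngl in the llama.cpp CLI; set it to 0 for CPU-only inference
llm = Llama(model_path="vicuna-13b-v1.3.0.ggmlv3.q5_0.bin", n_ctx=2048, n_gpu_layers=32)

prompt = "USER: Write a story about llamas\nASSISTANT:"
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```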
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: LmSys' Vicuna 13B v1.3
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
emilianJR/HRA_hyperrealism_art
|
emilianJR
| 2023-06-25T12:30:23Z | 52 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T12:20:01Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffuser model for this SD checkpoint:
https://civitai.com/models/80515/hrahyperrealism-art
**emilianJR/HRA_hyperrealism_art** is the Hugging Face Diffusers version of this checkpoint, ready to use with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 |  | 
 |  | 
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/HRA_hyperrealism_art"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Luke537/image_classification_food_model
|
Luke537
| 2023-06-25T12:30:18Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-24T19:15:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6474
- Accuracy: 0.893
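A minimal inference sketch, assuming the standard `transformers` image-classification pipeline and a hypothetical local image `food.jpg`:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Luke537/image_classification_food_model")

# Print the top three predicted food classes for a hypothetical image of a dish
for pred in classifier("food.jpg")[:3]:
    print(pred["label"], round(pred["score"], 3))
```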
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7587 | 0.99 | 62 | 2.5481 | 0.844 |
| 1.8903 | 2.0 | 125 | 1.8096 | 0.874 |
| 1.6502 | 2.98 | 186 | 1.6474 | 0.893 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.13.3
|
emilianJR/majicMIX_realistic_v6
|
emilianJR
| 2023-06-25T12:26:15Z | 69 | 14 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-18T12:42:51Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffuser model for this SD checkpoint:
https://civitai.com/models/43331/majicmix-realistic
**emilianJR/majicMIX_realistic_v6** is the Hugging Face Diffusers version of this checkpoint, ready to use with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 |  | 
 |  | 
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/majicMIX_realistic_v6"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
bogdancazan/t5-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:24:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T13:46:06Z |
training_args = TrainingArguments(
output_dir='t5-base-wikilarge-newsela-with-domain-adaptation',
num_train_epochs=20,
warmup_steps=250,
per_device_train_batch_size=BATCH_SIZE,
weight_decay=0.01,
learning_rate=2e-4,
# fp16=True,
optim="adafactor",
)
| Step | Training Loss |
|-----:|--------------:|
| 500 | 4.184500 |
| 1000 | 2.470900 |
| 1500 | 2.128900 |
| 2000 | 1.951600 |
| 2500 | 1.834400 |
| 3000 | 1.755800 |
| 3500 | 1.701800 |
| 4000 | 1.656300 |
| 4500 | 1.628800 |
TrainOutput(global_step=4640, training_loss=2.1286644540984057, metrics={'train_runtime': 4090.6694, 'train_samples_per_second': 72.526, 'train_steps_per_second': 1.134, 'total_flos': 0.0, 'train_loss': 2.1286644540984057, 'epoch': 20.0})
|
Smaraa/bart-text-simplification_1e4_adafactor
|
Smaraa
| 2023-06-25T11:45:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-24T11:26:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8377
- Rouge1: 60.5348
- Rouge2: 41.6762
- Rougel: 55.5994
- Rougelsum: 55.5841
- Gen Len: 18.7487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1741 | 1.0 | 1163 | 0.6416 | 62.4 | 44.1316 | 57.9029 | 57.8644 | 18.8482 |
| 0.1553 | 2.0 | 2326 | 0.6504 | 62.2879 | 43.9281 | 57.4714 | 57.461 | 18.8063 |
| 0.1369 | 3.0 | 3489 | 0.6656 | 61.2481 | 42.605 | 56.5118 | 56.4636 | 18.733 |
| 0.1286 | 4.0 | 4652 | 0.6906 | 61.3015 | 42.1608 | 56.2688 | 56.1707 | 18.7487 |
| 0.1141 | 5.0 | 5815 | 0.7082 | 62.1771 | 43.1481 | 57.0231 | 57.0673 | 18.911 |
| 0.1016 | 6.0 | 6978 | 0.7188 | 61.408 | 42.2759 | 56.1699 | 56.1779 | 18.8377 |
| 0.0961 | 7.0 | 8141 | 0.7334 | 60.802 | 41.9149 | 56.0171 | 56.0279 | 18.8168 |
| 0.0869 | 8.0 | 9304 | 0.7509 | 60.6564 | 41.3587 | 55.4436 | 55.468 | 18.7382 |
| 0.0783 | 9.0 | 10467 | 0.7713 | 60.3551 | 41.8074 | 55.6856 | 55.679 | 18.7173 |
| 0.0751 | 10.0 | 11630 | 0.7785 | 60.378 | 41.6134 | 55.5217 | 55.505 | 18.8325 |
| 0.0679 | 11.0 | 12793 | 0.7835 | 60.5835 | 41.6735 | 55.5469 | 55.5791 | 18.7435 |
| 0.0619 | 12.0 | 13956 | 0.8012 | 60.8152 | 41.2014 | 55.7186 | 55.7233 | 18.9424 |
| 0.0611 | 13.0 | 15119 | 0.8091 | 60.8188 | 41.8074 | 55.6684 | 55.8026 | 18.7958 |
| 0.0568 | 14.0 | 16282 | 0.8175 | 60.9209 | 41.5689 | 55.8838 | 55.8642 | 18.7277 |
| 0.0527 | 15.0 | 17445 | 0.8250 | 61.0215 | 41.9079 | 55.9018 | 55.8709 | 18.9162 |
| 0.0524 | 16.0 | 18608 | 0.8317 | 60.8214 | 41.6554 | 55.8053 | 55.7947 | 18.7277 |
| 0.0504 | 17.0 | 19771 | 0.8310 | 60.6533 | 41.6507 | 55.9289 | 55.9426 | 18.7958 |
| 0.0486 | 18.0 | 20934 | 0.8345 | 60.4722 | 41.5319 | 55.3384 | 55.3655 | 18.6859 |
| 0.0491 | 19.0 | 22097 | 0.8379 | 60.4012 | 41.2452 | 55.5059 | 55.5553 | 18.8115 |
| 0.0489 | 20.0 | 23260 | 0.8377 | 60.5348 | 41.6762 | 55.5994 | 55.5841 | 18.7487 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
alfredplpl/unlimited-1-0
|
alfredplpl
| 2023-06-25T11:44:51Z | 34 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2212.03860",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T11:21:59Z |
---
license: other
tags:
- stable-diffusion
- text-to-image
inference: false
---
# Unlimited 1.0 Model Card

Title: Unleash your limit.
English version is [here](README_en.md).
# Introduction
Unlimited is an image-generation AI specialized for AI art, developed as a replacement for the leaked Novel AI Diffusion.
# License
The license is simply the original CreativeML Open RAIL++-M License with a prohibition on commercial use (with exceptions) added.
The prohibition was added out of concern that unrestricted commercial use could harm the creative industry.
If you are at a for-profit company, please consult your legal department.
If you are using the model as a hobby, you do not need to worry too much; just follow common sense.
**If you would like to use the model commercially, please contact me separately at ozaki.yasunori@outlook.com.**
# Legal notes
This model was created in Japan, so Japanese law applies.
We maintain that training this model is lawful under Article 30-4 of the Japanese Copyright Act.
We also maintain that distributing this model constitutes neither a principal nor an accessory offense under the Copyright Act or Article 175 of the Penal Code. For details, please see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as stated in the license, please handle anything generated with this model in accordance with the applicable laws and regulations.
# How to use
The model can be downloaded in [safetensors format](unlimited_1_0.safetensors).
What follows is the generic model-card information.
## Model details
- **Model type:** diffusion-based text-to-image generation model
- **Language:** Japanese
- **License:** CreativeML Open RAIL++-M-NC License
- **Model description:** This model generates images that match a given prompt. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Usage examples
Usage is the same as for Stable Diffusion v2.
There are many ways to run the model; two options are described here:
- Web UI
- Diffusers
### Web UI
As with Stable Diffusion v2, place the model file (safetensors format) in your models folder.
For detailed installation instructions, see [this article](https://note.com/it_navi/n/n6ffb66513769).
Installing xformers and enabling the --xformers --disable-nan-check options tends to work well; otherwise, enable the --no-half option.
### Diffusers
Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the libraries:
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate an image:
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "alfredplpl/unlimited-1-0"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "masterpiece, anime, close up, white short hair, red eyes, 1girl, solo, red roses"
negative_prompt="lowres , kanji, monochrome, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)), ((censored)), ((bad aesthetic))"
images = pipe(prompt,negative_prompt=negative_prompt, num_inference_steps=30).images
images[0].save("girl.png")
```
**Notes**:
- [xformers](https://github.com/facebookresearch/xformers) will make generation faster.
- If your GPU has little memory, call `pipe.enable_attention_slicing()` (see the short example below).
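For example, continuing from the generation script above (a minimal sketch):
```python
# Reduce VRAM usage at some cost in speed (useful on GPUs with little memory).
pipe.enable_attention_slicing()
images = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30).images
images[0].save("girl_low_vram.png")
```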
#### Intended uses
- Self-expression
  - Sharing what you have created with this AI
- Reporting on image-generation AI
  - Possible not only for public broadcasters but also for for-profit companies
  - This is because we judged that the public's right to know about image-synthesis AI does not harm the creative industry, and because we respect freedom of the press
- Research and development
  - Using the model on Discord
  - Prompt engineering
  - Fine-tuning (also called additional training)
    - e.g. DreamBooth
  - Merging with other models
  - Measuring the performance of this model with metrics such as FID
  - Verifying with checksums or hash functions that this model is independent of models other than Stable Diffusion
- Education
  - Graduation projects by art-school and vocational-school students
  - University students' theses and coursework
  - Teachers explaining the current state of image-generation AI
- Any use accepted in the Hugging Face Community
  - Please ask questions in Japanese or English
#### Out-of-scope uses
- Presenting generated content as fact
- Use in monetized content, such as YouTube videos
- Offering the model directly as a commercial service
- Anything that would put teachers in a difficult position
- Any other act that harms the creative industry
# Prohibited and malicious uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate copyright law.
- Do not run Image-to-Image on other people's works without permission; this may violate copyright law.
- Do not distribute obscene material; this may violate Article 175 of the Penal Code.
- Do not ignore the accepted norms of the industries involved.
- Do not present claims that are not based on fact as if they were fact; this may constitute the crime of obstruction of business.
- No fake news.
## Model limitations and biases
### Limitations
- Much about diffusion models and large language models is still unknown, so their limitations are not yet understood.
### Biases
- Much about diffusion models and large language models is still unknown, so their biases are not yet understood.
## Training
**Training data**
Data and models that comply with Japanese domestic law.
**Training process**
- **Hardware:** A6000x2
## Evaluation results
We welcome third-party evaluations.
## Environmental impact
- **Hardware type:** A6000x2
- **Hours used:** 1000
- **Training location:** Japan
## References
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written based on the [Stable Diffusion v2 model card](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
|
PraveenJesu/openai-whisper-medium-peft-lora-colab
|
PraveenJesu
| 2023-06-25T11:43:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T11:43:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Erfan2001/distilbert_NoTokenized
|
Erfan2001
| 2023-06-25T11:43:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-24T22:00:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xxx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xxx
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6856
- Accuracy: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7996 | 1.0 | 4284 | 0.7921 | 0.7287 |
| 0.5539 | 2.0 | 8568 | 0.6856 | 0.7758 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
edfryo/bangkelser
|
edfryo
| 2023-06-25T11:39:27Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-09T11:58:00Z |
---
license: bigscience-openrail-m
---
|
jondurbin/airoboros-13b-gpt4-1.4-fp16
|
jondurbin
| 2023-06-25T11:39:17Z | 1,423 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:46:42Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
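A minimal loading sketch; the prompt format and generation settings below are illustrative assumptions, not taken from the original card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-13b-gpt4-1.4-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative chat-style prompt; check the base model's card for the exact recommended format.
prompt = "A chat between a curious user and an assistant.\nUSER: What is a llama?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```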
|
Ryukijano/DialoGPT_med_model
|
Ryukijano
| 2023-06-25T11:38:19Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T12:37:08Z |
Hello there! This bot is based on DialoGPT-medium and was trained for 45 epochs.
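A minimal chat sketch, assuming the checkpoint follows the standard DialoGPT usage pattern:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ryukijano/DialoGPT_med_model")
model = AutoModelForCausalLM.from_pretrained("Ryukijano/DialoGPT_med_model")

chat_history_ids = None
for user_input in ["Hello, how are you?", "What are you up to today?"]:
    # Encode the user input and append the end-of-sequence token.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    # Generate a response while keeping the running conversation as context.
    chat_history_ids = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```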
|
jiyuanq/falcon-40b-instruct-gptq-128g-act
|
jiyuanq
| 2023-06-25T10:35:13Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T08:31:32Z |
---
library_name: transformers
pipeline_tag: text-generation
---
falcon-40b-instruct quantized with GPTQ using the script in https://github.com/huggingface/text-generation-inference/pull/438
- group size: 128
- act order: true
- nsamples: 128
- dataset: wikitext2
|
abhishek-kumar/dreambooth_test
|
abhishek-kumar
| 2023-06-25T10:34:42Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T16:02:54Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - abhishek-kumar/output
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
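A minimal inference sketch, assuming the standard diffusers DreamBooth workflow; the extra prompt text and generation settings are illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "abhishek-kumar/dreambooth_test", torch_dtype=torch.float16
).to("cuda")

# "a photo of sks dog" is the instance prompt used during training, per the metadata above.
image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```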
|
Omogo/xlm-roberta-base-finetuned-panx-de
|
Omogo
| 2023-06-25T10:27:58Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T07:39:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8602627537962806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- F1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2574 | 1.0 | 525 | 0.1627 | 0.8221 |
| 0.1295 | 2.0 | 1050 | 0.1435 | 0.8467 |
| 0.0815 | 3.0 | 1575 | 0.1355 | 0.8603 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/orca_mini_3B-GGML
|
TheBloke
| 2023-06-25T10:25:04Z | 0 | 59 |
transformers
|
[
"transformers",
"en",
"dataset:psmathur/alpaca_orca",
"dataset:psmathur/dolly-v2_orca",
"dataset:psmathur/WizardLM_Orca",
"arxiv:2306.02707",
"license:mit",
"region:us"
] | null | 2023-06-24T22:33:56Z |
---
inference: false
license: mit
language:
- en
library_name: transformers
datasets:
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- psmathur/WizardLM_Orca
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini 3B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini 3B](https://huggingface.co/psmathur/orca_mini_3b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b)
## Prompt template:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Response:
```
or
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These cannot be provided with Open Llama 3B models at this time, due to an issue in llama.cpp.
This is being worked on in the llama.cpp repo. More issues here: https://github.com/ggerganov/llama.cpp/issues/1919
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-3b.ggmlv3.q4_0.bin | q4_0 | 4 | 1.93 GB | 4.43 GB | Original llama.cpp quant method, 4-bit. |
| orca-mini-3b.ggmlv3.q4_1.bin | q4_1 | 4 | 2.14 GB | 4.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca-mini-3b.ggmlv3.q5_0.bin | q5_0 | 5 | 2.36 GB | 4.86 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q5_1.bin | q5_1 | 5 | 2.57 GB | 5.07 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q8_0.bin | q8_0 | 8 | 3.64 GB | 6.14 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca-mini-3b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
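If you prefer calling the model from Python, here is a minimal sketch using `llama-cpp-python` (one of the compatible libraries listed above); the chosen quant file and generation settings are just examples:
```python
from llama_cpp import Llama

# Path to whichever quantised file you downloaded from this repo.
llm = Llama(model_path="orca-mini-3b.ggmlv3.q5_0.bin", n_ctx=2048)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nWrite a haiku about llamas\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
print(output["choices"][0]["text"])
```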
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini 3B
# orca_mini_3b
An [OpenLLaMA-3B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches of the Orca Research Paper.
# Dataset
We built explain-tuned versions of the [WizardLM dataset (~70K)](https://github.com/nlpxucan/WizardLM), [Alpaca dataset (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset (~15K)](https://github.com/databrickslabs/dolly) using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 8x A100 (80 GB) GPUs and took around 4 hours, at a cost of $48, using [Lambda Labs](https://lambdalabs.com).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).
Here are some of params used during training:
|||
|:-------------:|:-------------:|
|*batch_size*|64|
|*train_micro_batch_size_per_gpu*|4|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Below shows an example on how to use this model
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
#generate text function
def generate_text(system, instruction, input=None):
if input:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
else:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer.encode(prompt)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length+instance['generate_len'],
use_cache=True,
do_sample=True,
top_p=instance['top_p'],
temperature=instance['temperature'],
top_k=instance['top_k']
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f'[!] Response: {string}'
# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
```
[!] Response:
Dear Sam Altman,
I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.
Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.
I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.
Thank you for your consideration.
Sincerely,
[Your Name]
```
**P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at psmathur.public@gmail.com**
Next Goals:
1) Try more data, such as actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for the text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{wizardlm_alpaca_dolly_orca_open_llama_3b,
author = {Pankaj Mathur},
title = {wizardlm_alpaca_dolly_orca_open_llama_3b: An explain tuned OpenLLaMA-3b model on custom wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_3b}, \url{https://https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_3b}},
}
```
```
@software{openlm2023openllama,
author = {Xinyang Geng and Hao Liu},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
Sp1786/mutliclass-sentiment-analysis-bert
|
Sp1786
| 2023-06-25T10:22:55Z | 4 | 0 |
transformers
|
[
"transformers",
"bert",
"code",
"text-classification",
"en",
"dataset:Sp1786/multiclass-sentiment-analysis-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T11:23:59Z |
---
license: apache-2.0
datasets:
- Sp1786/multiclass-sentiment-analysis-dataset
language:
- en
metrics:
- bleu
- sacrebleu
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
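A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` sequence-classification API (the label names returned depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Sp1786/mutliclass-sentiment-analysis-bert")
print(classifier("I really enjoyed this movie!"))
# -> e.g. [{'label': ..., 'score': ...}] using the labels stored in the model config
```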
|
kbondar17/test-trainer
|
kbondar17
| 2023-06-25T10:12:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T10:06:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4009
- F1: 0.6363
- Roc Auc: 0.7682
- Accuracy: 0.6079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 125 | 0.2975 | 0.5710 | 0.7129 | 0.4693 |
| No log | 2.0 | 250 | 0.3742 | 0.6226 | 0.7621 | 0.6013 |
| No log | 3.0 | 375 | 0.4009 | 0.6363 | 0.7682 | 0.6079 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dhruvil237/userutterance_classification_verplus
|
dhruvil237
| 2023-06-25T10:05:26Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"doi:10.57967/hf/0811",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-05T12:20:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: userutterance_classification_verplus
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9619354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# userutterance_classification_verplus
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0219 | 0.21 | 200 | 4.9813 | 0.0077 |
| 4.8915 | 0.42 | 400 | 4.5741 | 0.1155 |
| 4.2736 | 0.63 | 600 | 3.5359 | 0.4719 |
| 3.2701 | 0.84 | 800 | 2.4291 | 0.7429 |
| 2.3578 | 1.05 | 1000 | 1.5793 | 0.8413 |
| 1.5695 | 1.26 | 1200 | 1.0029 | 0.8994 |
| 1.0412 | 1.47 | 1400 | 0.6475 | 0.9187 |
| 0.7034 | 1.68 | 1600 | 0.4439 | 0.9303 |
| 0.501 | 1.89 | 1800 | 0.3400 | 0.9381 |
| 0.3187 | 2.1 | 2000 | 0.2793 | 0.9439 |
| 0.2185 | 2.31 | 2200 | 0.2538 | 0.9490 |
| 0.1669 | 2.52 | 2400 | 0.2210 | 0.9523 |
| 0.1081 | 2.73 | 2600 | 0.2225 | 0.9519 |
| 0.1004 | 2.94 | 2800 | 0.2136 | 0.9555 |
| 0.0665 | 3.14 | 3000 | 0.2078 | 0.9561 |
| 0.0509 | 3.35 | 3200 | 0.2155 | 0.9568 |
| 0.05 | 3.56 | 3400 | 0.2107 | 0.9581 |
| 0.0527 | 3.77 | 3600 | 0.2171 | 0.9568 |
| 0.0447 | 3.98 | 3800 | 0.2128 | 0.9590 |
| 0.0259 | 4.19 | 4000 | 0.2099 | 0.9587 |
| 0.0279 | 4.4 | 4200 | 0.2179 | 0.9577 |
| 0.0176 | 4.61 | 4400 | 0.2191 | 0.9574 |
| 0.0288 | 4.82 | 4600 | 0.2216 | 0.9590 |
| 0.0328 | 5.03 | 4800 | 0.2237 | 0.9606 |
| 0.0154 | 5.24 | 5000 | 0.2241 | 0.9616 |
| 0.0157 | 5.45 | 5200 | 0.2265 | 0.9603 |
| 0.023 | 5.66 | 5400 | 0.2276 | 0.9613 |
| 0.0178 | 5.87 | 5600 | 0.2270 | 0.9619 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrizalf7/xlm-r-qa-squad-retrained
|
mrizalf7
| 2023-06-25T09:57:29Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T19:17:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2
This model is a fine-tuned version of [mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4](https://huggingface.co/mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 6.1326 |
| No log | 2.0 | 2 | 6.1326 |
| No log | 3.0 | 3 | 5.4152 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lucasbertola/ppo-LunarLander-v2
|
lucasbertola
| 2023-06-25T09:29:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T11:40:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 295.14 +/- 14.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
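A minimal loading sketch to fill in the template above; the filename `ppo-LunarLander-v2.zip` is an assumption, so check the repository's file list:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the files in this repo.
checkpoint = load_from_hub("lucasbertola/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")

# custom_objects may be needed if the model was saved with an older SB3/gym version.
custom_objects = {"learning_rate": 0.0, "lr_schedule": lambda _: 0.0, "clip_range": lambda _: 0.0}
model = PPO.load(checkpoint, custom_objects=custom_objects)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```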
|
sd-concepts-library/pokemon-raichu-sd-model
|
sd-concepts-library
| 2023-06-25T09:26:29Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-06-25T09:26:28Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Pokemon Raichu - SD model on Stable Diffusion
This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
ahishamm/vit-base-HAM-10000-sharpened
|
ahishamm
| 2023-06-25T09:17:26Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T08:42:48Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4392
- Accuracy: 0.8529
- Recall: 0.8529
- F1: 0.8529
- Precision: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7303 | 0.2 | 100 | 0.7828 | 0.7197 | 0.7197 | 0.7197 | 0.7197 |
| 0.7198 | 0.4 | 200 | 0.7519 | 0.7377 | 0.7377 | 0.7377 | 0.7377 |
| 0.7519 | 0.6 | 300 | 0.7125 | 0.7541 | 0.7541 | 0.7541 | 0.7541 |
| 0.6657 | 0.8 | 400 | 0.6623 | 0.7571 | 0.7571 | 0.7571 | 0.7571 |
| 0.5896 | 1.0 | 500 | 0.5964 | 0.7835 | 0.7835 | 0.7835 | 0.7835 |
| 0.515 | 1.2 | 600 | 0.5745 | 0.8015 | 0.8015 | 0.8015 | 0.8015 |
| 0.4318 | 1.4 | 700 | 0.5061 | 0.8200 | 0.8200 | 0.8200 | 0.8200 |
| 0.4299 | 1.6 | 800 | 0.5239 | 0.8075 | 0.8075 | 0.8075 | 0.8075 |
| 0.4793 | 1.8 | 900 | 0.5366 | 0.8125 | 0.8125 | 0.8125 | 0.8125 |
| 0.4202 | 2.0 | 1000 | 0.4882 | 0.8244 | 0.8244 | 0.8244 | 0.8244 |
| 0.2105 | 2.2 | 1100 | 0.5330 | 0.8234 | 0.8234 | 0.8234 | 0.8234 |
| 0.2597 | 2.4 | 1200 | 0.4604 | 0.8369 | 0.8369 | 0.8369 | 0.8369 |
| 0.2261 | 2.59 | 1300 | 0.4893 | 0.8409 | 0.8409 | 0.8409 | 0.8409 |
| 0.1853 | 2.79 | 1400 | 0.4793 | 0.8494 | 0.8494 | 0.8494 | 0.8494 |
| 0.1739 | 2.99 | 1500 | 0.4392 | 0.8529 | 0.8529 | 0.8529 | 0.8529 |
| 0.0629 | 3.19 | 1600 | 0.4941 | 0.8584 | 0.8584 | 0.8584 | 0.8584 |
| 0.0802 | 3.39 | 1700 | 0.4974 | 0.8613 | 0.8613 | 0.8613 | 0.8613 |
| 0.0712 | 3.59 | 1800 | 0.5416 | 0.8594 | 0.8594 | 0.8594 | 0.8594 |
| 0.0365 | 3.79 | 1900 | 0.5318 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
| 0.0591 | 3.99 | 2000 | 0.5344 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kchen621/ppo-LunarLander-v2
|
kchen621
| 2023-06-25T09:13:58Z | 1 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-06-19T19:22:36Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -169.25 +/- 80.22
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo_lun'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'kchen621/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Ellbendls/Pixelcopter-PLE-v0
|
Ellbendls
| 2023-06-25T09:09:39Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-23T12:37:16Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.70 +/- 42.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
anas21/keras-dummy-functional-demo
|
anas21
| 2023-06-25T09:07:24Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-06-24T23:19:07Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
Shridipta-06/LunarLander-v2_unit8part1
|
Shridipta-06
| 2023-06-25T08:50:28Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T08:46:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -128.49 +/- 35.10
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Shridipta-06/LunarLander-v2_unit8part1'
'batch_size': 512
'minibatch_size': 128}
```
|
RoundtTble/dinov2_vits14_onnx
|
RoundtTble
| 2023-06-25T08:20:24Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-06-24T07:10:50Z |
# dinov2_vits14
## ONNX Model
Check this [PR](https://github.com/facebookresearch/dinov2/pull/129).
## Run
Run the Triton container:
```
make triton
```
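Once the server is up (see the logs below), you can query it with the Triton Python client. A minimal sketch, where the tensor names `input`/`output` and the 224x224 input shape are assumptions to be checked against the repository's `config.pbtxt`:
```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Tensor names and shape are assumptions; check the model's config.pbtxt in this repo.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="dinov2_vits14", inputs=inputs, outputs=outputs)
print(result.as_numpy("output").shape)  # e.g. (1, 384) if the model returns the CLS embedding of ViT-S/14
```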
```
docker logs dinov2_vits14_triton
=============================
== Triton Inference Server ==
=============================
NVIDIA Release 23.04 (build 58408265)
Triton Server Version 2.33.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0625 08:05:36.712010 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f6c46000000' with size 268435456
I0625 08:05:36.712625 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0625 08:05:36.717785 1 model_lifecycle.cc:459] loading: dinov2_vits14:1
I0625 08:05:36.723707 1 onnxruntime.cc:2504] TRITONBACKEND_Initialize: onnxruntime
I0625 08:05:36.723725 1 onnxruntime.cc:2514] Triton TRITONBACKEND API version: 1.12
I0625 08:05:36.723731 1 onnxruntime.cc:2520] 'onnxruntime' TRITONBACKEND API version: 1.12
I0625 08:05:36.723735 1 onnxruntime.cc:2550] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0625 08:05:36.770311 1 onnxruntime.cc:2608] TRITONBACKEND_ModelInitialize: dinov2_vits14 (version 1)
I0625 08:05:36.770781 1 onnxruntime.cc:666] skipping model configuration auto-complete for 'dinov2_vits14': inputs and outputs already specified
I0625 08:05:36.771205 1 onnxruntime.cc:2651] TRITONBACKEND_ModelInstanceInitialize: dinov2_vits14_0 (GPU device 0)
2023-06-25 08:05:37.157976034 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 465, index: 122, mask: {125, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158142138 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 466, index: 123, mask: {62, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158159030 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 467, index: 124, mask: {126, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158174259 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 468, index: 125, mask: {63, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.165944431 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 344, index: 1, mask: {1, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158230084 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 469, index: 126, mask: {127, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169979079 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 347, index: 4, mask: {66, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169927531 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 345, index: 2, mask: {65, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169954703 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 346, index: 3, mask: {2, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173982388 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 350, index: 7, mask: {4, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173929448 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 348, index: 5, mask: {3, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173954065 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 349, index: 6, mask: {67, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.181926759 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 351, index: 8, mask: {68, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.185932583 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 352, index: 9, mask: {5, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.189924821 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 353, index: 10, mask: {69, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193940975 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 464, index: 121, mask: {61, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.194020786 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 357, index: 14, mask: {71, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193940915 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 354, index: 11, mask: {6, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193968147 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 355, index: 12, mask: {70, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193992072 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 356, index: 13, mask: {7, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.197974211 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 360, index: 17, mask: {9, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
[... identical pthread_setaffinity_np warnings repeat for the remaining ONNX Runtime worker threads and are omitted here ...]
2023-06-25 08:05:38.570069572 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-06-25 08:05:38.570088387 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0625 08:05:39.975559 1 model_lifecycle.cc:694] successfully loaded 'dinov2_vits14' version 1
I0625 08:05:39.975625 1 server.cc:583]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0625 08:05:39.975662 1 server.cc:610]
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.975683 1 server.cc:653]
+---------------+---------+--------+
| Model | Version | Status |
+---------------+---------+--------+
| dinov2_vits14 | 1 | READY |
+---------------+---------+--------+
I0625 08:05:39.991510 1 metrics.cc:808] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0625 08:05:39.992145 1 metrics.cc:701] Collecting CPU metrics
I0625 08:05:39.992360 1 tritonserver.cc:2387]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.33.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0] | /models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
| cache_enabled | 0 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.993603 1 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:8001
I0625 08:05:39.993771 1 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0625 08:05:40.034678 1 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
```
Perf analyzer `dinov2_vits14`
```
make perf
```
```
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:23.04-py3-sdk perf_analyzer -m dinov2_vits14 --percentile=95 -i grpc -u 0.0.0.0:8001 --concurrency-range 16:16 --shape input:3,280,280
=================================
== Triton Inference Server SDK ==
=================================
NVIDIA Release 23.04 (build 58408269)
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Latency limit: 0 msec
Concurrency limit: 16 concurrent requests
Using synchronous calls for inference
Stabilizing using p95 latency
Request concurrency: 16
Client:
Request count: 9403
Throughput: 522.33 infer/sec
p50 latency: 30482 usec
p90 latency: 32100 usec
p95 latency: 32564 usec
p99 latency: 34203 usec
Avg gRPC time: 30589 usec ((un)marshal request/response 93 usec + response wait 30496 usec)
Server:
Inference count: 9403
Execution count: 1177
Successful request count: 9403
Avg request latency: 24295 usec (overhead 220 usec + queue 9042 usec + compute input 1511 usec + compute infer 13485 usec + compute output 37 usec)
Inferences/Second vs. Client p95 Batch Latency
Concurrency: 16, throughput: 522.33 infer/sec, latency 32564 usec
```
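A minimal Python gRPC client sketch for sending a request to the deployed `dinov2_vits14` model. The input name `input` and shape come from the perf_analyzer command above; the FP32 dtype, the batch dimension, and the output tensor name `output` are assumptions — check the auto-completed model config if they differ.
```python
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="0.0.0.0:8001")

# Placeholder batch of one image; real inputs should be preprocessed DINOv2-style.
image = np.random.rand(1, 3, 280, 280).astype(np.float32)

infer_input = grpcclient.InferInput("input", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

result = client.infer(model_name="dinov2_vits14", inputs=[infer_input])
embedding = result.as_numpy("output")  # output name is an assumption
print(embedding.shape)
```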
|
Neupane9Sujal/Text_Summarization
|
Neupane9Sujal
| 2023-06-25T07:51:58Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"code",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-25T10:28:20Z |
---
language:
- en
metrics:
- rouge
tags:
- code
---
|
Lajonbot/Amazon-LightGPT-pl-qlora
|
Lajonbot
| 2023-06-25T07:40:56Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-05-29T06:22:37Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/lamini-instruct-tuned-3b-pl-lora
|
Lajonbot
| 2023-06-25T07:37:46Z | 0 | 0 | null |
[
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-15T06:08:17Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/stablelm-base-alpha-3b-instruct-pl-lora
|
Lajonbot
| 2023-06-25T07:37:23Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-15T06:13:44Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/polish-gpt2-small-instruct
|
Lajonbot
| 2023-06-25T07:36:40Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T19:33:30Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Davlan/bert-base-multilingual-cased-finetuned-swahili
|
Davlan
| 2023-06-25T07:32:51Z | 568 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: sw
---
# bert-base-multilingual-cased-finetuned-swahili
## Model description
**bert-base-multilingual-cased-finetuned-swahili** is a **Swahili BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Swahili language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko [MASK] kwamba "hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.31642526388168335,
'token': 10728,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Rwanda kwamba hakuna uhalifu ulitendwa',
'score': 0.15753623843193054,
'token': 57557,
'token_str': 'Rwanda'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Burundi kwamba hakuna uhalifu ulitendwa',
'score': 0.07211585342884064,
'token': 57824,
'token_str': 'Burundi'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.029844321310520172,
'token': 10688,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Senegal kwamba hakuna uhalifu ulitendwa',
'score': 0.0265930388122797,
'token': 38052,
'token_str': 'Senegal'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | sw_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.80 | 89.36
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/xlm-roberta-base-finetuned-swahili
|
Davlan
| 2023-06-25T07:31:57Z | 119 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: sw
---
# xlm-roberta-base-finetuned-swahili
## Model description
**xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than the XLM-RoBERTa on text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
'score': 0.5077782273292542,
'token': 190096,
'token_str': 'Ufaransa'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.3657738268375397,
'token': 7270,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
'score': 0.01592041552066803,
'token': 176392,
'token_str': 'Gabon'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.010881908237934113,
'token': 9942,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
'score': 0.009554869495332241,
'token': 185918,
'token_str': 'Marseille'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | sw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46
### BibTeX entry and citation info
By David Adelani
```
```
|
AdShenoy/Bart-samsum-fastai
|
AdShenoy
| 2023-06-25T07:20:59Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-06-24T06:53:20Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using ๐ค Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Davlan/xlm-roberta-base-finetuned-xhosa
|
Davlan
| 2023-06-25T07:14:21Z | 171 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
Davlan/byt5-base-eng-yor-mt
|
Davlan
| 2023-06-25T07:13:35Z | 147 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-eng-yor-mt
## Model description
**byt5-base-eng-yor-mt** is a **machine translation** model from English to Yorùbá based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from English to Yorùbá.
Specifically, this model is a *byt5-base* model that was fine-tuned on the JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
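A minimal usage sketch with the standard seq2seq API (the example sentence and generation settings here are assumptions, not taken from this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Davlan/byt5-base-eng-yor-mt")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/byt5-base-eng-yor-mt")

# ByT5 operates directly on bytes, so no special preprocessing is needed.
inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)  # max_length is an assumption
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```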
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the JW300 corpus and the [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on an NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves **12.23 BLEU** on the [Menyo-20k test set](https://arxiv.org/abs/2103.08647), while mt5-base achieves 9.82
### BibTeX entry and citation info
By David Adelani
```
```
|
masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
|
masakhane
| 2023-06-25T07:13:23Z | 325 | 8 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"am",
"bm",
"obj",
"ee",
"fon",
"ha",
"ig",
"rw",
"lg",
"luo",
"mos",
"ny",
"pcm",
"sn",
"sw",
"tn",
"tw",
"wo",
"xh",
"yo",
"zu",
"multilingual",
"dataset:masakhaner2",
"arxiv:2103.11811",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-12-15T12:52:06Z |
---
license: afl-3.0
language:
- am
- bm
- obj
- ee
- fon
- ha
- ig
- rw
- lg
- luo
- mos
- ny
- pcm
- sn
- sw
- tn
- tw
- wo
- xh
- yo
- zu
- multilingual
datasets:
- masakhaner2
---
# masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0
## Model description
**masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0** is a **Named Entity Recognition (NER)** model for 21 African languages. Specifically, this model is a *Davlan/afro-xlmr-large* model that was fine-tuned on an aggregation of African language datasets obtained from two versions of MasakhaNER dataset i.e. [MasakhaNER 1.0](https://huggingface.co/datasets/masakhaner) and [MasakhaNER 2.0](https://huggingface.co/datasets/masakhane/masakhaner2). The languages covered are:
- Amharic (amh)
- Bambara (bam)
- Ghomala (bbj)
- Ewe (ewe)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Dholuo (luo)
- Mossi (mos)
- Chichewa (nya)
- Nigerian Pidgin (pcm)
- chShona (sna)
- Kiswahili (swa)
- Setswana (tsn)
- Twi (twi)
- Wolof (wol)
- isiXhosa (xho)
- Yorรนbรก (yor)
- isiZulu (zul)
It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organization (ORG), and person (PER).
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0")
model = AutoModelForTokenClassification.from_pretrained("masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
## Eval results on MasakhaNER (F-score)
Model evaluated on MasakhaNER 1.0 and MasakhaNER 2.0 test sets
language| MasakhaNER 1.0 | MasakhaNER 2.0
-|-|-
amh |80.5|
bam || 83.1
bbj || 76.6
ewe || 89.6
fon || 83.8
hau |90.3| 87.5
ibo |89.5| 93.5
kin |82.0| 87.6
lug |87.1| 89.7
luo |80.8| 82.5
mos || 75.5
nya || 92.7
pcm |91.1| 90.9
sna || 96.5
swa |88.5| 93.4
tsn || 90.3
twi || 81.3
wol |72.7| 87.3
xho || 90.0
yor |88.1| 90.5
zul || 91.3
avg |**85.1**| **87.7**
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on the aggregation of [MasakhaNER 1.0](https://huggingface.co/datasets/masakhaner) and [MasakhaNER 2.0](https://huggingface.co/datasets/masakhane/masakhaner2) datasets
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
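Because the labels follow this B-/I- scheme, token-level predictions can be merged back into whole entity spans with the pipeline's aggregation option; a small sketch using the example sentence from above:
```python
from transformers import pipeline

nlp = pipeline(
    "ner",
    model="masakhane/afroxlmr-large-ner-masakhaner-1.0_2.0",
    aggregation_strategy="simple",  # merges B-/I- tokens into complete entities
)
print(nlp("Emir of Kano turban Zhang wey don spend 18 years for Nigeria"))
```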
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
### BibTeX entry and citation info
```
@article{Adelani2022MasakhaNER2A,
title={MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition},
author={David Ifeoluwa Adelani and Graham Neubig and Sebastian Ruder and Shruti Rijhwani and Michael Beukman and Chester Palen-Michel and Constantine Lignos and Jesujoba Oluwadara Alabi and Shamsuddeen Hassan Muhammad and Peter Nabende and Cheikh M. Bamba Dione and Andiswa Bukula and Rooweither Mabuya and Bonaventure F. P. Dossou and Blessing K. Sibanda and Happy Buzaaba and Jonathan Mukiibi and Godson Kalipe and Derguene Mbaye and Amelia Taylor and Fatoumata Kabore and Chris C. Emezue and Anuoluwapo Aremu and Perez Ogayo and Catherine W. Gitau and Edwin Munkoh-Buabeng and Victoire Memdjokam Koagne and Allahsera Auguste Tapo and Tebogo Macucwa and Vukosi Marivate and Elvis Mboning and Tajuddeen R. Gwadabe and Tosin P. Adewumi and Orevaoghene Ahia and Joyce Nakatumba-Nabende and Neo L. Mokono and Ignatius M Ezeani and Chiamaka Ijeoma Chukwuneke and Mofetoluwa Adeyemi and Gilles Hacheme and Idris Abdulmumin and Odunayo Ogundepo and Oreen Yousuf and Tatiana Moteu Ngoli and Dietrich Klakow},
journal={ArXiv},
year={2022},
volume={abs/2210.12391}
}
```
|
NasimB/gpt2-dp-mod-aochild-10chars
|
NasimB
| 2023-06-25T06:53:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T03:14:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-aochild-10chars
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-aochild-10chars
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7077 | 0.27 | 500 | 5.6423 |
| 5.3468 | 0.54 | 1000 | 5.2154 |
| 5.0042 | 0.8 | 1500 | 4.9608 |
| 4.7637 | 1.07 | 2000 | 4.7969 |
| 4.5583 | 1.34 | 2500 | 4.6931 |
| 4.4721 | 1.61 | 3000 | 4.5939 |
| 4.3855 | 1.88 | 3500 | 4.5049 |
| 4.218 | 2.15 | 4000 | 4.4679 |
| 4.1202 | 2.41 | 4500 | 4.4175 |
| 4.105 | 2.68 | 5000 | 4.3697 |
| 4.0733 | 2.95 | 5500 | 4.3257 |
| 3.8601 | 3.22 | 6000 | 4.3344 |
| 3.8504 | 3.49 | 6500 | 4.3033 |
| 3.8507 | 3.76 | 7000 | 4.2759 |
| 3.8215 | 4.02 | 7500 | 4.2709 |
| 3.5828 | 4.29 | 8000 | 4.2887 |
| 3.6183 | 4.56 | 8500 | 4.2711 |
| 3.6264 | 4.83 | 9000 | 4.2489 |
| 3.5136 | 5.1 | 9500 | 4.2794 |
| 3.3547 | 5.36 | 10000 | 4.2895 |
| 3.383 | 5.63 | 10500 | 4.2727 |
| 3.3982 | 5.9 | 11000 | 4.2594 |
| 3.2002 | 6.17 | 11500 | 4.3133 |
| 3.1199 | 6.44 | 12000 | 4.3184 |
| 3.1483 | 6.71 | 12500 | 4.3123 |
| 3.1516 | 6.97 | 13000 | 4.3013 |
| 2.9083 | 7.24 | 13500 | 4.3587 |
| 2.9076 | 7.51 | 14000 | 4.3641 |
| 2.9176 | 7.78 | 14500 | 4.3616 |
| 2.8855 | 8.05 | 15000 | 4.3806 |
| 2.7292 | 8.32 | 15500 | 4.3978 |
| 2.7443 | 8.58 | 16000 | 4.4023 |
| 2.7445 | 8.85 | 16500 | 4.4046 |
| 2.702 | 9.12 | 17000 | 4.4125 |
| 2.6515 | 9.39 | 17500 | 4.4159 |
| 2.6552 | 9.66 | 18000 | 4.4170 |
| 2.6529 | 9.92 | 18500 | 4.4173 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
teoha/openai-whisper-medium-PeftType.LORA-colab
|
teoha
| 2023-06-25T06:51:18Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T06:51:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
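Given the 8-bit quantization config above, a minimal sketch of loading this LoRA adapter (the `openai/whisper-medium` base model is inferred from the repository name, not stated in this card):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the (assumed) base model in 8-bit, mirroring the training-time config.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "teoha/openai-whisper-medium-PeftType.LORA-colab")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```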
### Framework versions
- PEFT 0.4.0.dev0
|
zanafi/sentiment_model
|
zanafi
| 2023-06-25T06:31:04Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:indonlu",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-23T06:53:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
config: emot
split: validation
args: emot
metrics:
- name: Accuracy
type: accuracy
value: 0.7363636363636363
- name: Precision
type: precision
value: 0.7397155596092384
- name: Recall
type: recall
value: 0.7459489407651173
- name: F1
type: f1
value: 0.741920437379511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_model
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7788
- Accuracy: 0.7364
- Precision: 0.7397
- Recall: 0.7459
- F1: 0.7419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.1939 | 1.0 | 221 | 0.8261 | 0.6932 | 0.7203 | 0.7034 | 0.7056 |
| 0.6866 | 2.0 | 442 | 0.7925 | 0.725 | 0.7378 | 0.7377 | 0.7346 |
| 0.4791 | 3.0 | 663 | 0.7788 | 0.7364 | 0.7397 | 0.7459 | 0.7419 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sukantan/all-mpnet-base-v2-ftlegal-v3
|
sukantan
| 2023-06-25T06:20:52Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"dataset:sukantan/nyaya-st-training",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-25T06:20:46Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- sukantan/nyaya-st-training
---
# sukantan/all-mpnet-base-v2-ftlegal-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sukantan/all-mpnet-base-v2-ftlegal-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sukantan/all-mpnet-base-v2-ftlegal-v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 391 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 391,
"weight_decay": 0.01
}
```
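A rough sketch of how these settings map onto the sentence-transformers training API; the base checkpoint `sentence-transformers/all-mpnet-base-v2` is inferred from the model name, and the training pairs below are placeholders for the [sukantan/nyaya-st-training](https://huggingface.co/datasets/sukantan/nyaya-st-training) data:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Placeholder anchor/positive pairs; the real data comes from the training dataset.
train_examples = [InputExample(texts=[f"anchor {i}", f"matching sentence {i}"]) for i in range(16)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MegaBatchMarginLoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=391,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```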
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Raizel123/pamelasafitrilora
|
Raizel123
| 2023-06-25T04:37:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:34:50Z |
---
license: creativeml-openrail-m
---
|
ardhies/CuteAsianFace
|
ardhies
| 2023-06-25T04:10:18Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:06:53Z |
---
license: creativeml-openrail-m
---
|
Laurie/baichuan-7b-qlora-moss
|
Laurie
| 2023-06-25T04:06:12Z | 5 | 0 |
peft
|
[
"peft",
"text-generation",
"zh",
"en",
"dataset:fnlp/moss-003-sft-data",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-25T03:38:18Z |
---
library_name: peft
license: apache-2.0
datasets:
- fnlp/moss-003-sft-data
language:
- zh
- en
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
## Usage
git clone https://huggingface.co/Laurie/baichuan-7b-qlora-moss
cd baichuan-7b-qlora-moss
python src/web_demo.py \
--model_name_or_path baichuan-inc/baichuan-7B \
--checkpoint_dir .
|
razaali/swin-tiny-patch4-window7-224-finetuned-eurosat
|
razaali
| 2023-06-25T04:00:02Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T03:25:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2501 | 1.0 | 190 | 0.1077 | 0.9626 |
| 0.1375 | 2.0 | 380 | 0.0892 | 0.9707 |
| 0.1324 | 3.0 | 570 | 0.0662 | 0.9770 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
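For inference, the fine-tuned checkpoint can be used with the standard image-classification pipeline. A minimal sketch, where `image.jpg` is a placeholder for any local image path or URL:

```python
from transformers import pipeline

# Load the fine-tuned Swin classifier from the Hub
classifier = pipeline("image-classification", model="razaali/swin-tiny-patch4-window7-224-finetuned-eurosat")

predictions = classifier("image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```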
|
CJacobnriia/spatnzRVC
|
CJacobnriia
| 2023-06-25T03:56:17Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-06-25T01:52:32Z |
---
language:
- en
---
This is an RVC model of spatnz (https://www.youtube.com/channel/UCcNPbOeFo-qM0wpis8Lwdig)

|
FutureMiracle/CGEC-BART-Model
|
FutureMiracle
| 2023-06-25T03:55:55Z | 0 | 3 |
fairseq
|
[
"fairseq",
"BART",
"pytorch",
"CGEC",
"translation",
"zh",
"license:apache-2.0",
"region:us"
] |
translation
| 2023-06-21T08:28:46Z |
---
license: apache-2.0
language:
- zh
library_name: fairseq
tags:
- BART
- pytorch
- CGEC
metrics:
- bleu
pipeline_tag: translation
---
# Chinese Grammatical Error Correction (CGEC)
Task: Chinese Grammatical Error Correction (CGEC).
Given a Chinese sentence as input, the CGEC task automatically corrects the spelling, grammatical, and semantic errors it contains and outputs the corrected text.
# CGEC approaches
The mainstream approaches are seq2seq and seq2edits; commonly used Chinese correction datasets include Lang8, NLPCC18, and CGED.
# Model description
We model text correction as a transformer-based seq2seq task. For the model choice, we use Chinese BART as the pretrained model and fine-tune it on the Lang8 and CGED training data.
Without introducing additional resources, this model reaches SOTA on the Lang8 test set.
# Model training
The model was trained with the fairseq library.
# How to use
Step 1: Download and install fairseq.
Step 2: Run inference with `interactive.py`:
```bash
python -u ${FAIRSEQ_DIR}/interactive.py $PROCESSED_DIR \
    --task syntax-enhanced-translation \
    --path ${MODEL_PATH} \
    --beam ${BEAM} \
    --nbest ${N_BEST} \
    -s src \
    -t tgt \
    --buffer-size 1000 \
    --batch-size 32 \
    --num-workers 12 \
    --log-format tqdm \
    --remove-bpe \
    --fp16 \
    --output_file $OUTPUT_DIR/output.nbest \
    <$OUTPUT_DIR/lang8_test.char
```
|
blackmount8/open-llama-7b-open-instruct-ct2-float16
|
blackmount8
| 2023-06-25T03:49:04Z | 9 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"region:us"
] |
text-generation
| 2023-06-24T15:05:27Z |
---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# blackmount8/open-llama-7B-open-instruct-ct2-float16
Float16 version of [VMware/open-llama-7b-open-instruct](https://huggingface.co/VMware/open-llama-7b-open-instruct), quantized using CTranslate2.
## VMware/open-llama-7B-open-instruct
Instruction-tuned version of the fully trained Open LLama 7B model. The model is open for **COMMERCIAL USE**.
**NOTE**: The model was trained using the Alpaca prompt template.
**NOTE**: The fast tokenizer produces incorrect encodings; set `use_fast = False` when instantiating the tokenizer.
## License
- **Commercially Viable**
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 7B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
## Use in CTranslate2
```
import ctranslate2
from transformers import AutoTokenizer
model_name = "blackmount8/open-llama-7b-open-instruct-ct2-float16"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
model = ctranslate2.Generator(model_name, device="auto", compute_type="float16")
input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]
outputs = model.generate_batch(input_tokens, max_length=128)
output_tokens = [
ele.sequences_ids[0] for ele in outputs
]
output = tokenizer.batch_decode(output_tokens)
print(output)
```
|
blackmount8/open-llama-13b-open-instruct-ct2-float16
|
blackmount8
| 2023-06-25T03:48:21Z | 4 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"region:us"
] |
text-generation
| 2023-06-24T16:44:56Z |
---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# blackmount8/open-llama-13B-open-instruct-ct2-float16
Float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2.
## VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for **COMMERCIAL USE**.
**NOTE**: The model was trained using the Alpaca prompt template.
**NOTE**: The fast tokenizer produces incorrect encodings; set `use_fast = False` when instantiating the tokenizer.
## License
- **Commercially Viable**
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
## Use in CTranslate2
```
import ctranslate2
from transformers import AutoTokenizer
model_name = "blackmount8/open-llama-13b-open-instruct-ct2-float16"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
model = ctranslate2.Generator(model_name, device="auto", compute_type="float16")
input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]
outputs = model.generate_batch(input_tokens, max_length=128)
output_tokens = [
ele.sequences_ids[0] for ele in outputs
]
output = tokenizer.batch_decode(output_tokens)
print(output)
```
|
duyhngoc/Wave2Vec2_OV_Vie
|
duyhngoc
| 2023-06-25T03:47:48Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"vivos",
"generated_from_trainer",
"dataset:vivos",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-21T10:58:36Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- vivos
- generated_from_trainer
datasets:
- vivos
metrics:
- wer
model-index:
- name: Wave2Vec2_OV_Vie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wave2Vec2_OV_Vie
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the VIVOS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5894
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.27 | 100 | 3.9210 | 1.0 |
| No log | 0.55 | 200 | 3.4375 | 1.0 |
| No log | 0.82 | 300 | 3.4356 | 1.0 |
| No log | 1.1 | 400 | 3.4045 | 1.0 |
| 4.1866 | 1.37 | 500 | 3.4694 | 1.0 |
| 4.1866 | 1.65 | 600 | 3.6266 | 1.0 |
| 4.1866 | 1.92 | 700 | 3.5694 | 1.0 |
| 4.1866 | 2.19 | 800 | 3.5733 | 1.0 |
| 4.1866 | 2.47 | 900 | 3.6381 | 1.0 |
| 3.4376 | 2.74 | 1000 | 3.6604 | 1.0 |
| 3.4376 | 3.02 | 1100 | 3.5868 | 1.0 |
| 3.4376 | 3.29 | 1200 | 3.4988 | 1.0 |
| 3.4376 | 3.57 | 1300 | 3.5409 | 1.0 |
| 3.4376 | 3.84 | 1400 | 3.4883 | 1.0 |
| 3.4365 | 4.12 | 1500 | 3.6125 | 1.0 |
| 3.4365 | 4.39 | 1600 | 3.6123 | 1.0 |
| 3.4365 | 4.66 | 1700 | 3.5978 | 1.0 |
| 3.4365 | 4.94 | 1800 | 3.5693 | 1.0 |
| 3.4365 | 5.21 | 1900 | 3.5659 | 1.0 |
| 3.4339 | 5.49 | 2000 | 3.6234 | 1.0 |
| 3.4339 | 5.76 | 2100 | 3.5997 | 1.0 |
| 3.4339 | 6.04 | 2200 | 3.6529 | 1.0 |
| 3.4339 | 6.31 | 2300 | 3.5780 | 1.0 |
| 3.4339 | 6.58 | 2400 | 3.5844 | 1.0 |
| 3.4333 | 6.86 | 2500 | 3.5792 | 1.0 |
| 3.4333 | 7.13 | 2600 | 3.5468 | 1.0 |
| 3.4333 | 7.41 | 2700 | 3.5691 | 1.0 |
| 3.4333 | 7.68 | 2800 | 3.5408 | 1.0 |
| 3.4333 | 7.96 | 2900 | 3.5482 | 1.0 |
| 3.4294 | 8.23 | 3000 | 3.6070 | 1.0 |
| 3.4294 | 8.5 | 3100 | 3.5905 | 1.0 |
| 3.4294 | 8.78 | 3200 | 3.6018 | 1.0 |
| 3.4294 | 9.05 | 3300 | 3.6326 | 1.0 |
| 3.4294 | 9.33 | 3400 | 3.6214 | 1.0 |
| 3.4293 | 9.6 | 3500 | 3.6372 | 1.0 |
| 3.4293 | 9.88 | 3600 | 3.6215 | 1.0 |
| 3.4293 | 10.15 | 3700 | 3.5106 | 1.0 |
| 3.4293 | 10.43 | 3800 | 3.5066 | 1.0 |
| 3.4293 | 10.7 | 3900 | 3.5352 | 1.0 |
| 3.4295 | 10.97 | 4000 | 3.5129 | 1.0 |
| 3.4295 | 11.25 | 4100 | 3.6384 | 1.0 |
| 3.4295 | 11.52 | 4200 | 3.6019 | 1.0 |
| 3.4295 | 11.8 | 4300 | 3.5876 | 1.0 |
| 3.4295 | 12.07 | 4400 | 3.6207 | 1.0 |
| 3.4252 | 12.35 | 4500 | 3.5998 | 1.0 |
| 3.4252 | 12.62 | 4600 | 3.6216 | 1.0 |
| 3.4252 | 12.89 | 4700 | 3.6073 | 1.0 |
| 3.4252 | 13.17 | 4800 | 3.5567 | 1.0 |
| 3.4252 | 13.44 | 4900 | 3.5745 | 1.0 |
| 3.4274 | 13.72 | 5000 | 3.5738 | 1.0 |
| 3.4274 | 13.99 | 5100 | 3.5914 | 1.0 |
| 3.4274 | 14.27 | 5200 | 3.6004 | 1.0 |
| 3.4274 | 14.54 | 5300 | 3.5968 | 1.0 |
| 3.4274 | 14.81 | 5400 | 3.5908 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
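A minimal inference sketch with the automatic-speech-recognition pipeline (note that the reported WER of 1.0 suggests the checkpoint has not converged); `sample.wav` is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="duyhngoc/Wave2Vec2_OV_Vie")

result = asr("sample.wav")
print(result["text"])
```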
|
tyavika/pytorch
|
tyavika
| 2023-06-25T03:32:35Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-25T03:00:51Z |
---
tags:
- generated_from_trainer
model-index:
- name: pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pytorch
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jiuzhan/YoLoV7-dog
|
jiuzhan
| 2023-06-25T03:27:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-25T03:11:07Z |
# Dog breed recognition
Trained on a dataset of 500 images, which is rather small.
It covers 38 breeds, so there are only a few images per breed.
Performance is mediocre.
|
MazVer/queenbee
|
MazVer
| 2023-06-25T02:55:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T02:52:19Z |
---
license: creativeml-openrail-m
---
|
gaiamolinaro/ppo-SnowballTarget
|
gaiamolinaro
| 2023-06-25T02:36:18Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-25T02:36:11Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: gaiamolinaro/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
klaylouis1932/xlm-roberta-base-finetuned-panx-de
|
klaylouis1932
| 2023-06-25T02:22:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-24T07:47:55Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
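A minimal inference sketch with the token-classification pipeline; the German example sentence is illustrative only:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="klaylouis1932/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```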
|
nbiish/learning-taxi-v3
|
nbiish
| 2023-06-25T02:14:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T02:14:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # depending on your setup this may need to be `import gymnasium as gym`

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="nbiish/learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mort666/faster-whisper-large-v2-th
|
mort666
| 2023-06-25T02:13:04Z | 644 | 8 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2023-06-18T15:28:40Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 (Thai Finetune) model for CTranslate2
This repository contains the conversion of the [biodatlab/whisper-th-large-combined](https://huggingface.co/biodatlab/whisper-th-large-combined) which is finetune of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) for the Thai language to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
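To use this specific Thai conversion rather than the stock `large-v2` weights, download the repository first and pass the local path to `WhisperModel`. A minimal sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import snapshot_download
from faster_whisper import WhisperModel

# Download this converted model and point faster-whisper at the local directory
model_dir = snapshot_download("mort666/faster-whisper-large-v2-th")
model = WhisperModel(model_dir, compute_type="float16")

segments, info = model.transcribe("audio.mp3", language="th")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```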
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model biodatlab/whisper-th-large-combined --output_dir faster-whisper-large-v2-th \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/biodatlab/whisper-th-large-combined).**
|
cczhong/chinese-alpaca-plus-lora-7b-merged-ggml-4b
|
cczhong
| 2023-06-25T01:30:50Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-06-24T17:53:57Z |
Requires GGML v3 model files to run (llama-cpp-python > 0.1.57).
Converted from https://github.com/ymcui/Chinese-LLaMA-Alpaca
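A minimal llama-cpp-python sketch; the model file name and the Alpaca-style prompt template below are assumptions, so substitute the actual GGML v3 `.bin` file from this repository:

```python
from llama_cpp import Llama

# The file name is an assumption; use the actual GGML v3 file from this repo
llm = Llama(model_path="chinese-alpaca-plus-7b-merged-ggml-q4_0.bin", n_ctx=2048)

# Chinese-Alpaca uses an Alpaca-style prompt; the exact template here is an assumption
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nIntroduce the city of Beijing.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])
```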
|
yashgharat/HFTaxi-v3
|
yashgharat
| 2023-06-25T01:18:43Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T01:18:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: HFTaxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # depending on your setup this may need to be `import gymnasium as gym`

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="yashgharat/HFTaxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-dp-mod_aochild
|
NasimB
| 2023-06-25T00:27:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-24T20:59:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod_aochild
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod_aochild
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.706 | 0.27 | 500 | 5.6466 |
| 5.3616 | 0.54 | 1000 | 5.2058 |
| 5.0148 | 0.81 | 1500 | 4.9571 |
| 4.7595 | 1.08 | 2000 | 4.8100 |
| 4.5716 | 1.35 | 2500 | 4.6947 |
| 4.4792 | 1.62 | 3000 | 4.5951 |
| 4.3985 | 1.89 | 3500 | 4.5126 |
| 4.2203 | 2.16 | 4000 | 4.4747 |
| 4.1373 | 2.42 | 4500 | 4.4206 |
| 4.1109 | 2.69 | 5000 | 4.3695 |
| 4.0827 | 2.96 | 5500 | 4.3285 |
| 3.8662 | 3.23 | 6000 | 4.3409 |
| 3.863 | 3.5 | 6500 | 4.3058 |
| 3.8585 | 3.77 | 7000 | 4.2777 |
| 3.8073 | 4.04 | 7500 | 4.2766 |
| 3.594 | 4.31 | 8000 | 4.2886 |
| 3.6275 | 4.58 | 8500 | 4.2700 |
| 3.6373 | 4.85 | 9000 | 4.2436 |
| 3.488 | 5.12 | 9500 | 4.2800 |
| 3.3669 | 5.39 | 10000 | 4.2884 |
| 3.3981 | 5.66 | 10500 | 4.2764 |
| 3.3991 | 5.93 | 11000 | 4.2533 |
| 3.177 | 6.2 | 11500 | 4.3110 |
| 3.1321 | 6.47 | 12000 | 4.3137 |
| 3.1491 | 6.73 | 12500 | 4.3083 |
| 3.1544 | 7.0 | 13000 | 4.3112 |
| 2.8924 | 7.27 | 13500 | 4.3587 |
| 2.9109 | 7.54 | 14000 | 4.3634 |
| 2.9185 | 7.81 | 14500 | 4.3600 |
| 2.8619 | 8.08 | 15000 | 4.3819 |
| 2.7347 | 8.35 | 15500 | 4.3980 |
| 2.7435 | 8.62 | 16000 | 4.4007 |
| 2.752 | 8.89 | 16500 | 4.4012 |
| 2.6887 | 9.16 | 17000 | 4.4116 |
| 2.6506 | 9.43 | 17500 | 4.4137 |
| 2.6588 | 9.7 | 18000 | 4.4144 |
| 2.66 | 9.97 | 18500 | 4.4146 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
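A minimal text-generation sketch for trying the checkpoint:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-dp-mod_aochild")
print(generator("Once upon a time", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```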
|
alantao912/models
|
alantao912
| 2023-06-25T00:07:35Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"blip",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-06-24T20:19:09Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models
This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4107
- Wer Score: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 9.4536 | 0.05 | 10 | 7.8217 | 41.7753 |
| 7.3267 | 0.11 | 20 | 6.6585 | 0.7753 |
| 6.2358 | 0.16 | 30 | 5.7758 | 0.5667 |
| 5.2862 | 0.22 | 40 | 4.7628 | 0.5419 |
| 4.3786 | 0.27 | 50 | 3.9203 | 0.6398 |
| 3.5554 | 0.33 | 60 | 3.1482 | 0.5613 |
| 2.849 | 0.38 | 70 | 2.5209 | 0.5548 |
| 2.3041 | 0.44 | 80 | 2.0561 | 0.5645 |
| 1.8999 | 0.49 | 90 | 1.7474 | 0.5645 |
| 1.658 | 0.55 | 100 | 1.5722 | 0.5548 |
| 1.5238 | 0.6 | 110 | 1.4836 | 0.5591 |
| 1.4726 | 0.66 | 120 | 1.4461 | 0.5538 |
| 1.4328 | 0.71 | 130 | 1.4285 | 0.5473 |
| 1.4211 | 0.77 | 140 | 1.4205 | 0.5559 |
| 1.4202 | 0.82 | 150 | 1.4156 | 0.5548 |
| 1.4098 | 0.88 | 160 | 1.4129 | 0.5505 |
| 1.4124 | 0.93 | 170 | 1.4113 | 0.5548 |
| 1.4075 | 0.99 | 180 | 1.4107 | 0.5495 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
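A minimal captioning sketch, assuming the processor files were saved alongside the fine-tuned weights (otherwise load the processor from the base `Salesforce/blip-image-captioning-base` checkpoint); `photo.jpg` is a placeholder path:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("alantao912/models")
model = BlipForConditionalGeneration.from_pretrained("alantao912/models")

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```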
|
benbav97/ppo-Huggy
|
benbav97
| 2023-06-24T23:46:50Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-24T22:49:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: benbav97/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
mohameddhiab/rate-jokes-bert
|
mohameddhiab
| 2023-06-24T23:45:08Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-24T23:21:00Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: rate-jokes-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rate-jokes-bert
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0871
- F1: 0.0444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 64 | 2.4209 | 0.0028 |
| No log | 2.0 | 128 | 2.3785 | 0.0130 |
| No log | 3.0 | 192 | 2.3215 | 0.0729 |
| No log | 4.0 | 256 | 2.1787 | 0.0444 |
| No log | 5.0 | 320 | 2.1038 | 0.0444 |
| No log | 6.0 | 384 | 2.0944 | 0.0444 |
| No log | 7.0 | 448 | 2.0911 | 0.0444 |
| 2.2915 | 8.0 | 512 | 2.0901 | 0.0444 |
| 2.2915 | 9.0 | 576 | 2.0892 | 0.0444 |
| 2.2915 | 10.0 | 640 | 2.0871 | 0.0444 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
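A minimal inference sketch with the text-classification pipeline (label names follow the default `LABEL_i` scheme unless an `id2label` mapping was saved with the checkpoint):

```python
from transformers import pipeline

rater = pipeline("text-classification", model="mohameddhiab/rate-jokes-bert")
print(rater("Why did the chicken cross the road? To get to the other side."))
```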
|
97jmlr/pyramids
|
97jmlr
| 2023-06-24T23:32:30Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-24T23:32:23Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 97jmlr/pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
Monk666/my_awesome_eli5_clm-model
|
Monk666
| 2023-06-24T23:28:23Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-24T23:19:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Monk666/my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Monk666/my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7288
- Validation Loss: 3.7309
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9096 | 3.7608 | 0 |
| 3.7906 | 3.7412 | 1 |
| 3.7288 | 3.7309 | 2 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.3
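A minimal generation sketch using the TensorFlow weights this card describes:

```python
from transformers import pipeline

# framework="tf" loads the TensorFlow checkpoint
generator = pipeline("text-generation", model="Monk666/my_awesome_eli5_clm-model", framework="tf")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)[0]["generated_text"])
```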
|
MindNetML/dqn-SpaceInvadersNoFrameskip-v4
|
MindNetML
| 2023-06-24T23:09:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-24T23:08:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.50 +/- 179.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MindNetML -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MindNetML -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MindNetML
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mohalm/videomae-base-finetuned-ucf101-subset
|
mohalm
| 2023-06-24T23:03:09Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-06-24T20:21:51Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0008
- eval_accuracy: 1.0
- eval_runtime: 223.6754
- eval_samples_per_second: 0.443
- eval_steps_per_second: 0.076
- epoch: 1.01
- step: 43
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 164
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/orca_mini_7B-GGML
|
TheBloke
| 2023-06-24T22:49:41Z | 0 | 21 |
transformers
|
[
"transformers",
"en",
"dataset:psmathur/alpaca_orca",
"dataset:psmathur/dolly-v2_orca",
"dataset:psmathur/WizardLM_Orca",
"arxiv:2306.02707",
"license:mit",
"region:us"
] | null | 2023-06-24T22:07:15Z |
---
inference: false
license: mit
language:
- en
library_name: transformers
datasets:
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- psmathur/WizardLM_Orca
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini 7B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini 7B](https://huggingface.co/psmathur/orca_mini_7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/orca_mini_7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_7b)
## Prompt template:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Response:
```
or
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| orca-mini-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca-mini-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| orca-mini-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| orca-mini-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| orca-mini-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca-mini-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| orca-mini-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| orca-mini-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca-mini-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca-mini-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| orca-mini-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| orca-mini-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| orca-mini-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca-mini-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
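The same GGML files can also be driven from Python via `llama-cpp-python`. A minimal sketch, assuming the q4_0 file has been downloaded locally:

```python
from llama_cpp import Llama

llm = Llama(model_path="orca-mini-7b.ggmlv3.q4_0.bin", n_ctx=2048)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction extremely well. "
    "Help as much as you can.\n\n### User:\nWrite a haiku about llamas\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["###"])
print(output["choices"][0]["text"])
```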
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini 7B
# orca_mini_7b
An [OpenLLaMa-7B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches of the Orca Research Paper.
# Dataset
We build explain tuned [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) created using approaches from [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).
Please see below example usage how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 8x A100 (80G) GPUs and took around 7 hours, at a cost of $84 on [Lambda Labs](https://lambdalabs.com).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), writing our own fine-tuning scripts and leveraging some of the model training code provided by the excellent [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).
Here are some of params used during training:
|||
|:-------------:|:-------------:|
|*batch_size*|32|
|*train_micro_batch_size_per_gpu*|2|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|
# Example Usage
Below shows an example on how to use this model
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
#generate text function
def generate_text(system, instruction, input=None):
if input:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
else:
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
tokens = tokenizer.encode(prompt)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to('cuda')
instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length+instance['generate_len'],
use_cache=True,
do_sample=True,
top_p=instance['top_p'],
temperature=instance['temperature'],
top_k=instance['top_k']
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
return f'[!] Response: {string}'
# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
```
[!] Response:
Dear Sam Altman,
I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.
Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.
I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.
Thank you for your consideration.
Sincerely,
[Your Name]
```
**P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at psmathur.public@gmail.com**
Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for Text generation UI. (may be https://github.com/oobabooga/text-generation-webui)
3) Provide 4bit GGML/GPTQ quantized model (may be [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found wizardlm_alpaca_dolly_orca_open_llama_7b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{wizardlm_alpaca_dolly_orca_open_llama_7b,
author = {Pankaj Mathur},
title = {wizardlm_alpaca_dolly_orca_open_llama_7b: An explain tuned OpenLLaMA-7b model on custom wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_7b}, \url{https://https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_7b}},
}
```
```
@software{openlm2023openllama,
author = {Xinyang Geng and Hao Liu},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
97jmlr/ppo-SnowballTarget
|
97jmlr
| 2023-06-24T22:43:03Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-24T22:42:57Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: 97jmlr/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
digiplay/illustro1stEdition_illustroV1
|
digiplay
| 2023-06-24T22:33:19Z | 384 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T20:27:59Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/96101/illustro-1st-edition-photoreal-fantasy-model
Original Author's DEMO image (detail link): https://civitai.com/images/1266239
|
EIStakovskii/french_toxicity_classifier_plus
|
EIStakovskii
| 2023-06-24T22:31:55Z | 108 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"text-classification",
"fr",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T07:48:14Z |
---
language: fr
widget:
- text: "J'aime ta coiffure"
example_title: "NOT TOXIC 1"
- text: "Va te faire foutre"
example_title: "TOXIC 1"
- text: "Quel mauvais temps, n'est-ce pas ?"
example_title: "NOT TOXIC 2"
- text: "J'espรจre que tu vas mourir, connard !"
example_title: "TOXIC 2"
- text: "j'aime beaucoup ta veste"
example_title: "NOT TOXIC 3"
license: other
---
This model was trained for toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC
The model was fine-tuned from [the CamemBERT language model](https://huggingface.co/camembert-base).
The accuracy is 93% on the test split during training and 79% on a manually picked (and thus harder) sample of 200 sentences (100 label 1, 100 label 0) at the end of the training.
The model was finetuned on 32k sentences. The train data was the translations of the English data (around 30k sentences) from [the multilingual_detox dataset](https://github.com/s-nlp/multilingual_detox) by [Skolkovo Institute](https://huggingface.co/SkolkovoInstitute) using [the opus-mt-en-fr translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) by [Helsinki-NLP](https://huggingface.co/Helsinki-NLP) and the data from [the jigsaw dataset](https://www.kaggle.com/competitions/jigsaw-multilingual-toxic-comment-classification/data) on kaggle.
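A minimal inference sketch with the transformers `pipeline` (label mapping as described above; the widget sentences from this card are reused as inputs):
```python
from transformers import pipeline

# Minimal sketch: binary toxicity classification for French text.
# Label_1 = TOXIC, Label_0 = NOT TOXIC (per the description above).
clf = pipeline("text-classification", model="EIStakovskii/french_toxicity_classifier_plus")

for sentence in ["J'aime ta coiffure", "Va te faire foutre"]:
    print(sentence, "->", clf(sentence)[0])
```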
|
dar-tau/ppo-LunarLander-v2
|
dar-tau
| 2023-06-24T22:12:59Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-24T22:06:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.44 +/- 18.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, not confirmed by this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; adjust to the actual file in this repo.
checkpoint = load_from_hub("dar-tau/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
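Continuing from the snippet above, a quick evaluation sketch (assumes Gymnasium with Box2D installed; swap in `gym` for older SB3 1.x setups):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the loaded policy over a few episodes; numbers will vary around the card's reported score.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```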
|
pongjin/en_with_korean_model_large_960h
|
pongjin
| 2023-06-24T21:56:41Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:pongjin/en_corpora_parliament_processed",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-16T16:55:59Z |
---
license: apache-2.0
datasets:
- pongjin/en_corpora_parliament_processed
language:
- en
pipeline_tag: automatic-speech-recognition
metrics:
- wer
---
**This model was built with reference to the following links:**
1) https://huggingface.co/blog/wav2vec2-with-ngram
2) https://huggingface.co/blog/fine-tune-wav2vec2-english
Thanks to [Patrick von Platen](https://huggingface.co/patrickvonplaten)
This is an ASR + LM model: facebook/wav2vec2-large-960h fine-tuned to improve recognition of English speech by Korean speakers, with a KenLM 5-gram language model attached.
If you want to use LM, you must have kenlm installed https://github.com/kpu/kenlm
```bash
pip install https://github.com/kpu/kenlm/archive/master.zip
```
Training data source: https://aiopen.etri.re.kr/voiceModel
>transformers==4.24.0
>huggingface_hub==0.13.2
| wer | epoch | batch | lr | weight_decay| warmup_steps|
| --- | --- | --- | --- | --- | --- |
| 0.17 | 10 | 16 | 1e-4 | 0.005 | 1000 |
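A minimal inference sketch (the audio path is a placeholder; using the attached 5-gram during decoding additionally assumes `pyctcdecode` and `kenlm` are installed):
```python
from transformers import pipeline

# Minimal sketch: transcribe a 16 kHz English audio clip with this ASR + LM checkpoint.
# "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="pongjin/en_with_korean_model_large_960h")
print(asr("sample.wav")["text"])
```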
|
SandeepKanao/HL7-FHIR-Model-V1
|
SandeepKanao
| 2023-06-24T21:45:41Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-22T12:44:31Z |
---
license: apache-2.0
language:
- en
tags:
- Token Classification
co2_eq_emissions: 0.0279399890043426
widget:
- text: ""MSH|^~&|SendingAPP|MYTEST|||20230621090000||ORU^R01|1|P|2.5.1||||||UNICODE
PID|1||13579246^^^TEST||Taylor^Michael||19830520|M|||987 Pine St^^Anytown^NY^23456||555-456-7890
PV1|1||bc^^004
OBR|1||13579246|BCD^LEFT Breast Cancer Diagnosis^99MRC||20230621090000|||Taylor^Sarah||20230620090000|||N
OBX|1|ST|FINDINGS^Findings^99MRC||Lab report shows asymmetric density in the right breast.|F|||R
OBX|2|ST|IMPRESSION^Impression^99MRC||BIRADS category: 4 - Probably left side as issues.|F|||R
OBX|3|ST|RECOMMENDATION^Recommendation^99MRC||Follow-up specialit visit in six months.|F|||R""
example_title: "example 1"
- text: "MSH|^~&|SendingAPP|MYTEST|||20230621090000||ORU^R01|1|P|2.5.1||||||UNICODE
PID|1||13579246^^^TEST||Taylor^Michael||19830520|M|||987 Pine St^^Anytown^NY^23456||555-456-7890
PV1|1||bc^^004
OBR|1||13579246|BCD^LEFT Breast Cancer Diagnosis^99MRC||20230621090000|||Taylor^Sarah||20230620090000|||N
OBX|1|ST|FINDINGS^Findings^99MRC||Lab report shows asymmetric density in the right breast.|F|||R
OBX|2|ST|IMPRESSION^Impression^99MRC||BIRADS category: 4 - Probably left side as issues.|F|||R
OBX|3|ST|RECOMMENDATION^Recommendation^99MRC||Follow-up specialit visit in six months.|F|||R"
## About the Model
An English Named Entity Recognition model, trained on Maccrobat to recognize biomedical entities (107 entity types) from a given text corpus (case reports etc.). This model was built on top of distilbert-base-uncased.
- Dataset: Maccrobat https://figshare.com/articles/dataset/MACCROBAT2018/9764942
- Carbon emission: 0.0279399890043426 Kg
- Training time: 30.16527 minutes
- GPU used : 1 x GeForce RTX 3060 Laptop GPU
Check out the tutorial video for an explanation of this model and the corresponding python library: https://youtu.be/xpiDPdBpS18
## Usage
The easiest way is to use the Hugging Face Inference API; the second option is the `pipeline` object offered by the transformers library.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all")
model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") # pass device=0 if using gpu
pipe("""The patient reported no recurrence of palpitations at follow-up 6 months after the ablation.""")
```
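Each item returned by the aggregated NER pipeline above is a dict with `entity_group`, `score`, `word`, `start`, and `end` fields, so predictions can be mapped back to character spans in the input text.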
## Author
|
digiplay/ChillyMix_v1
|
digiplay
| 2023-06-24T21:19:54Z | 291 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-23T16:14:13Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/58772?modelVersionId=63220
Original Author's DEMO image :

image detail link: https://civitai.com/images/701538
|
conrevo/Segment-Anything-A1111
|
conrevo
| 2023-06-24T21:18:24Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-16T20:01:12Z |
This repository contains:
- The extracted plugin model from https://huggingface.co/shi-labs/Matting-Anything
- The ultralytics model from https://github.com/CASIA-IVA-Lab/FastSAM
These models are intended to be used only with https://github.com/continue-revolution/sd-webui-segment-anything
|