Column schema (one record per row below):

| column | type | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 06:27:22 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (525 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 06:27:10 |
| card | string (length) | 11 | 1.01M |
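Each row below is one record with the columns above. A minimal sketch of handling such records as plain dicts, with example values drawn from the first row (den2nova/FlexDreamHK) and the `card` field omitted for brevity:

```python
# One record of the listing, represented as a plain dict.
record = {
    "modelId": "den2nova/FlexDreamHK",
    "author": "den2nova",
    "last_modified": "2023-07-29T04:21:29Z",
    "downloads": 150,
    "likes": 17,
    "library_name": "diffusers",
    "tags": ["diffusers", "safetensors", "text-to-image"],
    "pipeline_tag": "text-to-image",
    "createdAt": "2023-07-06T10:11:45Z",
}

REQUIRED_FIELDS = {"modelId", "author", "downloads", "likes", "pipeline_tag"}

def is_valid(rec: dict) -> bool:
    """Check that a record carries the required fields with sane types."""
    return (REQUIRED_FIELDS <= rec.keys()
            and isinstance(rec["downloads"], int)
            and isinstance(rec["likes"], int))

print(is_valid(record))  # → True
```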
den2nova/FlexDreamHK
den2nova
2023-07-29T04:21:29Z
150
17
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-06T10:11:45Z
--- license: creativeml-openrail-m language: - ja library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion --- # <u>🎈 FlexDreamHK</u> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/banner2.png" width="100%" height="100%">  <b>FlexDreamHK</b> was created <b style="color:#dc143c;">aiming at a model that contains none of the leaked NovelAI model, or that keeps that risk as low as possible</b>.<br><br>  The model name combines the names of the main merge sources, in tribute to the models used.<br><br>  The merge sources consist exclusively of models fine-tuned from Stable Diffusion or Waifu Diffusion.<br>  In addition, art-style LoRAs trained on images generated with Niji Journey and my everyday models were created and merged in, so this is also a so-called <b style="color:#4753a2;">distilled model</b>.<br><br>  Transparency is ensured as far as possible by disclosing the merge process, the LoRAs themselves, and the datasets used to create them. ----------------------------- # 🎀 Features <ul> <li style="color:red;font-weight:bold;">Strengths</li> <ul> <li>Crisp anime-style illustrations with bold outlines</li> <li>Easily produces cute solo girls</li> <li>Handles NSFW to some extent</li> <li>Reasonably responsive to prompts</li> <li>Generation specialized in character illustrations</li> </ul> <li style="color:blue;font-weight:bold;">Weaknesses</li> <ul> <li>Tends to cast heavy shadows</li> <li>Struggles with images featuring multiple people</li> <li>Poor at drawing men</li> <li>Tags outside danbooru have little effect (color specifications in particular)</li> <li>Somewhat limited variety of facial expressions</li> <li>Poor at background-focused illustrations</li> <li>Relatively frequent breakdowns in fine details, including hands and fingers</li> </ul> </ul> ----------------------------- ## 👉 Recommended settings <ul> <li>clip skip: 2 / no VAE required</li> <li>If faces melt, the <a href="https://github.com/Bing-su/adetailer">adetailer</a> extension is recommended</li> <li>Recommended negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35)</li> </ul> ----------------------------- ## History <table> <tr> <th>Date</th> <th>Details</th> </tr> <tr> <td>2023/07/29</td> <td>FlexDreamHK_v2.0 sample images uploaded</td> </tr> <tr> <td>2023/07/28</td> <td>FlexDreamHK_v2.0 released</td> </tr> <tr> <td>2023/07/07</td> <td>FlexDreamHK_v1.0 released</td> </tr> </table> ----------------------------- ## ⭕ ライセンス / License <b>creativeml-openrail-m</b> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> 
</span> </td> <td> 著作者表記を入れずにモデルを使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> 商用画像生成サービスに、このモデルを使用する<br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルを使用したマージモデルを共有・配布する<br> Share merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデル、または派生モデルを販売する<br> Sell this model or merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する<br> Have different permissions when sharing merges </td> </tr> </table> </div> ----------------------------- # ver2.0 ## 🍳 レシピ / Recipe <div class="px-2"> <div class="border p-2"> <details> <table> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>weight</th> <th>OUTPUT</th> </tr> </thead> <tbody> <tr> <td>FlexDreamHK_v1.0</td> <td><a href="https://huggingface.co/sazyou-roukaku/LittleStepMix">LittleStepMix_A</a></td> <td></td> <td>Weight sum cosineA 0.5</td> <td>FlexDreamHK_2.0_orig</td> </tr> <tr> <td>FlexDreamHK_2.0_orig</td> <td></td> <td></td> <td>adjust 0,0,0,0,1,1,2</td> <td>FlexDreamHK_v2.0</td> </tr> </tbody> </table> </details> </div> </div> ----------------------------- <details> <summary>🎨 Samples</summary> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0000.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, from above, blonde hair, short ponytail hair, amber eyes, small breasts, armored dress, outdoors, fantasy castle, nervous, nice hands Negative prompt: (nsfw, extra fingers, 
deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35), demon horns Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3921621133, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0001.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> <a href="https://twitter.com/Emanon_14/status/1684944352161026049">Prompt borrowed from Emanon</a> 1girl, smile, sitting, poncho, frills, gothic, snowflakes, winter, campfire, polar bear Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3452924181, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0002.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, flower, japanese clothes, hair ornament, long hair, hair flower, kimono, smile, looking at viewer, white flower, floral print, red flower, very long hair, jewelry, earrings, hakama, bangs, closed mouth, blue eyes, braid, hakama skirt, skirt, blush, long sleeves, red hakama Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 4251802516, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> <img 
src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/ver20_grid-0003.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> multiple girls, 2girls, cat, blue hair girl and pink hair girl, long hair, ahoge, school uniform Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 434535967, Size: 512x512, Model hash: e2c364c195, Model: FlexDreamHK_v2.0, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: lollypop, Version: v1.3.1 </pre> </details> ----------------------------- # ver1.0 <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/banner.jpg" width="100%" height="100%"> ## 🍳 レシピ / Recipe <div class="px-2"> <div class="border p-2"> <details> <table> <thead> <tr> <th>A</th> <th>B</th> <th>C</th> <th>weight</th> <th>OUTPUT</th> </tr> <tr> <td><a href="https://civitai.com/models/25694/epicrealism">epicrealism_pureEvolutionV3</a></td> <td><a href="https://civitai.com/models/4384?modelVersionId=94081">dreamshaper_631BakedVae</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.5</td> <td>epicdreamv5</td> </tr> <tr> <td><a href="https://huggingface.co/Ai-tensa/FlexWaifu">FlexWaifuRainbow</a></td> <td><a href="https://civitai.com/models/82813?modelVersionId=106905">sdhk_v40</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.5</td> <td>FlexHKrainbow</td> </tr> <tr> <td>FlexHKrainbow</td> <td>epicdreamv5</td> <td></td> <td>COSAIN</td> <td>FlexHK_Rainbowe_epicdream</td> </tr> <tr> <td>FlexHK_Rainbowe_epicdream</td> <td colspan="3">LoRA <a href="https://huggingface.co/datasets/den2nova/den2niji">den2niji</a>:0.5:KAO,<a 
href="https://huggingface.co/datasets/den2nova/den2SD">den2SD</a>:0.5:KAO<br>※Each LoRA is an art-style LoRA made from Niji Journey and my everyday models, trained on SDHKv3.0 (both the datasets and the LoRAs themselves are published at the links)<br>※KAO weights: 0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0</td> <td>FlexHK_beta1</td> </tr> <tr> <td><a href="https://civitai.com/models/82813?modelVersionId=89247">sdhk_v30</a></td> <td><a href="https://civitai.com/models/4384?modelVersionId=94081">dreamshaper_631BakedVae</a></td> <td></td> <td>0,1,0.842423804012346,0.71508487654321,0.615234375,0.540123456790123,<br> 0.487003279320988,0.453125,0.435739776234568,0.432098765432099,0.439453125,<br> 0.455054012345679,0.476152584876543,0.5,0.523847415123457,0.544945987654321,0.560546875,<br> 0.2,0.2,0,0.2,0.459876543209876,0.384765625,0.28491512345679,0.157576195987653,0</td> <td>230627_1</td> </tr> <tr> <td>230627_1</td> <td colspan="3">LoRA <a href="https://huggingface.co/datasets/den2nova/den2niji">den2niji</a>:0.8:KAO,<a href="https://huggingface.co/datasets/den2nova/den2SD">den2SD</a>:0.8:KAO</td> <td>230627_1_LoRA</td> </tr> <tr> <td>230627_1_LoRA</td> <td colspan="3">LoRA den2SD-41:0.3:KAO</td> <td>230627_1_LoRA2</td> </tr> <tr> <td>230627_1_LoRA2</td> <td colspan="3">LoRA <a href="https://civitai.com/models/102188/sdhkv4qu">SDHKv4_QU</a>:2</td> <td>230627_1_LoRA_QU2.0</td> </tr> <tr> <td>FlexHK_beta1</td> <td>230627_1_LoRA_QU2.0</td> <td></td> <td>FAKE_CUBIC_HERMITE</td> <td>FlexHK_beta2</td> </tr> <tr> <td>FlexHK_beta2</td> <td><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-3">wd-v1-3-float16</a></td> <td><a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">v1-5-pruned-emaonly</a></td> <td>Add difference 0.25</td> <td>FlexDreamHK_v1</td> </tr> </tbody> </table> </details> </div> </div> ----------------------------- <details> <summary>🎨 Samples</summary> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0000.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, framed, silver hair, 
dreadlocks, indigo eyes, huge breasts, china gothic lolita style dress, hand on own chin, sweet, flowers, Bellflower, frozen lakeside , light smile, nice hands, standing Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1658825243, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0001.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, (wide shot, fisheye:0.85), pink hair, twintails, orange eyes, small breasts, cheerleader pom pom, hand on own knee, rose, instrument, Poinsettia, bedroom , desperation, nice hands, squatting Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2578613301, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0002.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, from above, red hair, bowl cut, light brown eyes, small breasts, astronaut suit, hand on own head, feeling of floating, space station , surprised, nice hands, flying Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2288316915, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires 
steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> <img src="https://huggingface.co/den2nova/FlexDreamHK/resolve/main/sample/grid-0003.png" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl, solo, album cover, green hair, ponytail hair, dark green eyes, huge breasts, school uniform, arm up, door, prism, building , happy, nice hands, standing, petals, cherry blossoms Negative prompt: (nsfw, extra fingers, deformed hands, polydactyl:1.3), (worst quality, low quality, poor quality, bad quality:1.35) Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1151456510, Size: 512x512, Model hash: 7ab6f37bb0, Model: FlexDreamHK_v1, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2.5, Hires steps: 10, Hires upscaler: Latent, Version: v1.4.0 </pre> </details> -----------------------------  In creating this model, I made extensive use of the NAIリークフリーマージモデル研究会 (NAI-leak-free merge model study group).<br>  It helped sustain motivation and spark ideas, and I am grateful to everyone who encouraged this model's creation and shared model information.
EXrRor3/ppo-SnowballTarget
EXrRor3
2023-07-29T04:20:04Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-29T04:20:02Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: EXrRor3/ppo-SnowballTarget 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
rzambrano/Pixelcopter-PLE-v0
rzambrano
2023-07-29T04:07:02Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-29T01:06:53Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.70 +/- 24.83 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
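The `mean_reward` metric in the card above (42.70 +/- 24.83) is typically the mean and standard deviation of returns over the evaluation episodes. A sketch of how such a summary is computed, using made-up episode returns rather than the actual evaluation data:

```python
import statistics

def summarize_returns(returns):
    """Summarize evaluation episodes as 'mean +/- std', as model cards report it."""
    mean = statistics.mean(returns)
    std = statistics.pstdev(returns)  # population std over the evaluated episodes
    return mean, std

# Hypothetical episode returns, chosen for illustration only:
episode_returns = [12.0, 55.0, 71.0, 30.0, 45.5]
mean, std = summarize_returns(episode_returns)
print(f"{mean:.2f} +/- {std:.2f}")  # → 42.70 +/- 20.32
```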
Nebulon/MBWMD
Nebulon
2023-07-29T03:50:40Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T07:30:48Z
--- license: creativeml-openrail-m ---
xiaol/wizard-rwkv-world-7B-ctx32k
xiaol
2023-07-29T03:44:51Z
0
5
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-29T02:25:04Z
--- license: apache-2.0 --- This model is fine-tuned from the RWKV4 World 7B English model, using the Wizard dataset to fit a 32k context. It should handle complex instructions and accept up to 32k tokens of input, but note that the Wizard dataset has at most 4K tokens per sample, so more testing on long prompts and generation is still to come. Use https://github.com/josStorer/RWKV-Runner to run this model. ![QQ截图20230729111155.png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/OCapn4mFYvXAXU1PfIwf8.png) ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/mScVGl8mhETw8qq45JeoM.png)
petrznel/blurred_landmarks
petrznel
2023-07-29T03:43:17Z
248
1
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-29T02:37:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: blurred_landmarks results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: landmarks split: validation args: landmarks metrics: - name: Accuracy type: accuracy value: 0.9645365168539326 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blurred_landmarks This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1152 - Accuracy: 0.9645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6588 | 1.0 | 357 | 0.6460 | 0.7707 | | 0.3752 | 2.0 | 714 | 0.2969 | 0.8933 | | 0.3275 | 3.0 | 1071 | 0.1912 | 0.9319 | | 0.2183 | 4.0 | 1429 | 0.1794 | 0.9305 | | 0.2133 | 5.0 | 1786 | 0.1638 | 0.9414 | | 0.1984 | 6.0 | 2143 | 0.1322 | 0.9484 | | 0.1409 | 7.0 | 2500 | 0.1304 | 0.9529 | | 0.1864 | 8.0 | 2858 | 0.1212 | 0.9572 | | 0.1778 | 9.0 | 3215 | 0.1216 | 0.9540 | | 0.1734 | 10.0 | 3572 | 0.1129 | 0.9593 | | 0.1349 | 11.0 | 3929 | 0.1127 | 0.9614 | | 0.1057 | 12.0 | 4287 | 0.1177 | 0.9582 | | 
0.1434 | 13.0 | 4644 | 0.1153 | 0.9603 | | 0.0832 | 14.0 | 5001 | 0.1264 | 0.9593 | | 0.0963 | 15.0 | 5358 | 0.1146 | 0.9607 | | 0.0642 | 16.0 | 5716 | 0.1135 | 0.9635 | | 0.0763 | 17.0 | 6073 | 0.1210 | 0.9614 | | 0.0432 | 18.0 | 6430 | 0.1162 | 0.9645 | | 0.0618 | 19.0 | 6787 | 0.1269 | 0.9600 | | 0.049 | 19.99 | 7140 | 0.1152 | 0.9645 | ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 1.13.0 - Datasets 2.10.1 - Tokenizers 0.11.0
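A side note on the hyperparameters above: `total_train_batch_size: 32` follows from the per-device batch size (8) and gradient accumulation (4), and `lr_scheduler_warmup_ratio: 0.1` over the 7140 total steps implies 714 warmup steps. A sketch of that arithmetic:

```python
import math

def effective_batch_size(per_device: int, grad_accum: int, n_devices: int = 1) -> int:
    """Batch size per optimizer step when gradients are accumulated before each update."""
    return per_device * grad_accum * n_devices

def warmup_steps(total_steps: int, warmup_ratio: float) -> int:
    """Linear-warmup length implied by a warmup ratio over the full training run."""
    return math.ceil(total_steps * warmup_ratio)

print(effective_batch_size(8, 4))  # → 32, matching total_train_batch_size above
print(warmup_steps(7140, 0.1))     # → 714
```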
chriskim2273/IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert
chriskim2273
2023-07-29T03:43:06Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-29T03:03:45Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IOTNation_CompanyName_AND_Location_AND_Series_Extraction_QA_Model_1.6_DistilBert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
marccloudera/llma-finetuned-7b2
marccloudera
2023-07-29T03:27:10Z
0
0
null
[ "generated_from_trainer", "base_model:daryl149/llama-2-7b-chat-hf", "base_model:finetune:daryl149/llama-2-7b-chat-hf", "region:us" ]
null
2023-07-28T23:04:58Z
--- base_model: daryl149/llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: llma-finetuned-7b2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llma-finetuned-7b2 This model is a fine-tuned version of [daryl149/llama-2-7b-chat-hf](https://huggingface.co/daryl149/llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.3
hoang14/test_qlora_1
hoang14
2023-07-29T03:08:05Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-29T03:00:09Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
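The flags listed above map one-to-one onto the keyword arguments of transformers' `BitsAndBytesConfig` for QLoRA-style 4-bit loading. A sketch carrying them as plain keyword arguments (the library call itself is omitted so the sketch stays self-contained):

```python
# The quantization flags from the card, in the shape one would pass to
# transformers' BitsAndBytesConfig when loading the base model.
bnb_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}

# Sanity checks mirroring the card: exactly one of 8-bit/4-bit is enabled,
# and the 4-bit path uses NF4 with double quantization.
assert bnb_kwargs["load_in_4bit"] != bnb_kwargs["load_in_8bit"]
assert bnb_kwargs["bnb_4bit_quant_type"] == "nf4"
print("4-bit NF4 config OK")
```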
lmsys/longchat-13b-16k
lmsys
2023-07-29T02:59:51Z
12,834
131
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-28T05:33:42Z
--- inference: false --- # longchat-13b-16k Model Card ## Usage Please use `load_model` from the FastChat or LongChat repo to load the model (or the chatting API from FastChat). A monkey patch is needed to use the model. Usage reference: (LongChat) `python3 eval.py --model-name-or-path lmsys/longchat-13b-16k --task topics` (FastChat) `python3 -m fastchat.serve.cli --model-path lmsys/longchat-13b-16k` Under the hood, the monkey patch is added in: https://github.com/lm-sys/FastChat/blob/da0641e567cf93756b0978ab5a6b092e96f06240/fastchat/model/model_adapter.py#L429 ## Model details **Model type:** longchat-13b-16k is an open-source chatbot trained by fine-tuning llama-13b on user-shared conversations collected from ShareGPT, using the condensing rotary embedding technique reported in the [blog](https://lmsys.org/blog/2023-06-29-longchat). **Model date:** longchat-13b-16k was trained in June 2023. **Organizations developing the model:** The LongChat developers: Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lianmin Zheng, Ion Stoica, Xuezhe Ma, and Hao Zhang **Paper or resources for more information:** https://github.com/DachengLi1/LongChat **Where to send questions or comments about the model:** https://github.com/DachengLi1/LongChat ## Intended use **Primary intended uses:** The primary use of longchat-13b-16k is for research purposes. **Primary intended users:** The primary intended users of the model are researchers in natural language processing, machine learning, and artificial intelligence. ## Training dataset 18K conversations collected from ShareGPT.com. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by our released [LongEval](https://github.com/DachengLi1/LongChat).
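The condensing rotary embedding technique cited in the card scales position indices down so that a 16k-token input maps into the 2k-position range the base llama-13b was trained on. A minimal sketch of the idea (the actual patch lives in FastChat's `model_adapter.py`; this is an illustration of the scaling, not that code):

```python
def condense_positions(position_ids, trained_ctx=2048, target_ctx=16384):
    """Scale position indices down by target/trained so positions up to target_ctx
    fall inside the [0, trained_ctx) range the base model was trained on.
    A sketch of the 'condensing rotary embedding' idea, not FastChat's patch."""
    ratio = target_ctx / trained_ctx  # 8x for 2k -> 16k
    return [p / ratio for p in position_ids]

# A token at raw position 16000 is embedded as if it sat at position 2000:
print(condense_positions([0, 8, 16000]))  # → [0.0, 1.0, 2000.0]
```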
nakcnx/wangchang-math
nakcnx
2023-07-29T02:55:31Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-29T02:55:27Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
MattBoraske/REINFORCE-PixelCopter
MattBoraske
2023-07-29T02:44:09Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-27T20:33:45Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: REINFORCE-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 26.80 +/- 20.80 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Monte Carlo Reinforce** agent playing **Pixelcopter-PLE-v0** .
MattBoraske/ppo-HumanoidStandup-v2-init
MattBoraske
2023-07-29T02:42:14Z
0
0
stable-baselines3
[ "stable-baselines3", "HumanoidStandup-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-04-28T01:45:07Z
--- library_name: stable-baselines3 tags: - HumanoidStandup-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: HumanoidStandup-v2 type: HumanoidStandup-v2 metrics: - type: mean_reward value: 65822.31 +/- 10972.33 name: mean_reward verified: false --- # **PPO** Agent playing **HumanoidStandup-v2** This is a trained model of a **PPO** agent playing **HumanoidStandup-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
MattBoraske/ppo-CartPole-v1
MattBoraske
2023-07-29T02:41:49Z
1
0
stable-baselines3
[ "stable-baselines3", "CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-04-27T18:53:03Z
--- library_name: stable-baselines3 tags: - CartPole-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **PPO** Agent playing **CartPole-v1** This is a trained model of a **PPO** agent playing **CartPole-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
NasimB/gutenberg_fixed-rarity-seed
NasimB
2023-07-29T02:41:00Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T22:48:29Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gutenberg_fixed-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gutenberg_fixed-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1027 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3464 | 0.29 | 500 | 5.3385 | | 5.0325 | 0.58 | 1000 | 4.9275 | | 4.7015 | 0.87 | 1500 | 4.6813 | | 4.4439 | 1.16 | 2000 | 4.5384 | | 4.2877 | 1.46 | 2500 | 4.4265 | | 4.1914 | 1.75 | 3000 | 4.3221 | | 4.0739 | 2.04 | 3500 | 4.2465 | | 3.8852 | 2.33 | 4000 | 4.2009 | | 3.8675 | 2.62 | 4500 | 4.1497 | | 3.8215 | 2.91 | 5000 | 4.0969 | | 3.6454 | 3.2 | 5500 | 4.0917 | | 3.5847 | 3.49 | 6000 | 4.0672 | | 3.5639 | 3.79 | 6500 | 4.0323 | | 3.4775 | 4.08 | 7000 | 4.0319 | | 3.3099 | 4.37 | 7500 | 4.0274 | | 3.3121 | 4.66 | 8000 | 4.0128 | | 3.2957 | 4.95 | 8500 | 4.0009 | | 3.1585 | 5.24 | 9000 | 4.0134 | | 3.1319 | 5.53 | 9500 | 4.0121 | | 3.1272 | 5.82 | 10000 | 4.0113 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
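The `cosine` scheduler with `lr_scheduler_warmup_steps: 1000` in the card above ramps linearly to the peak rate and then decays along a half cosine. A sketch of that schedule, assuming decay to zero over the run's 10000 steps:

```python
import math

def cosine_lr(step, peak_lr=5e-4, warmup_steps=1000, total_steps=10000):
    """Linear warmup to peak_lr, then cosine decay to zero -- a sketch of the
    'cosine' scheduler with the warmup and peak rate listed above."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(0))      # → 0.0
print(cosine_lr(1000))   # → 0.0005 (the peak learning rate)
print(cosine_lr(10000))  # → 0.0
```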
MattBoraske/dqn-LunarLander-v2-10M
MattBoraske
2023-07-29T02:39:59Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "en", "license:apache-2.0", "model-index", "region:us" ]
reinforcement-learning
2023-04-28T00:01:18Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 84.43 +/- 73.70 name: mean_reward verified: false license: apache-2.0 language: - en --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
TalesLF/rl_course_vizdoom_health_gathering_supreme
TalesLF
2023-07-29T02:19:08Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-29T02:18:51Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 10.59 +/- 5.25 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r TalesLF/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
machinelearningzuu/paper-summarization
machinelearningzuu
2023-07-29T01:16:54Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-13T14:31:12Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: paper-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paper-summarization This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 2.3296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2336 | 1.0 | 78 | 2.5990 | | 2.7888 | 2.0 | 156 | 2.3754 | | 2.5667 | 3.0 | 234 | 2.3296 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.13.3
sensualg/llama2-qlora-guanaco-test
sensualg
2023-07-29T01:15:41Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-29T01:15:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
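For reference, the listed settings can also be written down in code. A minimal sketch — the dict below mirrors the values above, while the loading call in the comment (and the base-model placeholder it would need) is an assumption, not something stated in this card:

```python
# Quantization settings from this card, expressed as a plain dict.
bnb_settings = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}

# Hypothetical usage (requires transformers + bitsandbytes; the base model
# is not named in this card):
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "<base-model>", quantization_config=BitsAndBytesConfig(**bnb_settings)
# )
```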
eikoenchine/taxi-Q-learning-off-policy
eikoenchine
2023-07-29T01:00:14Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "region:us" ]
reinforcement-learning
2023-07-29T00:34:54Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="eikoenchine/taxi-Q-learning-off-policy", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
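At evaluation time a tabular Q-learning agent acts greedily: in each state it takes the action with the highest Q-value. A small self-contained sketch of that step (the array below is a toy Q-table for illustration, not the trained one from this repo):

```python
import numpy as np

def greedy_policy(qtable: np.ndarray, state: int) -> int:
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

# Toy 3-state, 2-action Q-table, for illustration only.
q = np.array([
    [0.1, 0.9],   # state 0 -> action 1
    [0.7, 0.2],   # state 1 -> action 0
    [0.0, 0.0],   # ties resolve to the first action
])
print([greedy_policy(q, s) for s in range(3)])  # [1, 0, 0]
```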
ToolBench/ToolBench_IR_bert_based_uncased
ToolBench
2023-07-29T00:55:19Z
278
18
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-27T02:29:11Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # ToolBench/ToolBench_IR_bert_based_uncased This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('ToolBench/ToolBench_IR_bert_based_uncased') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. 
```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('ToolBench/ToolBench_IR_bert_based_uncased') model = AutoModel.from_pretrained('ToolBench/ToolBench_IR_bert_based_uncased') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ToolBench/ToolBench_IR_bert_based_uncased) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15101 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "api_evaluator.APIEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 
'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 500, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
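As a complement to the usage snippets above: once sentences are embedded, retrieval ranks candidates by cosine similarity between their vectors (the training loss above uses `cos_sim` as its similarity function). A small self-contained illustration with made-up vectors, not actual model outputs:

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical query/document embeddings (tiny, for illustration only).
query = np.array([1.0, 0.0, 1.0])
docs = {"weather_api": np.array([1.0, 0.1, 0.9]),
        "stock_api":   np.array([-1.0, 0.5, 0.0])}
ranked = sorted(docs, key=lambda k: cos_sim(query, docs[k]), reverse=True)
print(ranked[0])  # weather_api
```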
vadhri/qcnn-hybrid
vadhri
2023-07-29T00:46:58Z
0
0
null
[ "image-classification", "en", "license:mit", "region:us" ]
image-classification
2023-07-29T00:34:26Z
--- license: mit language: - en tags: - image-classification --- Modified LeNet-5 for multi-class image classification using a hidden quantum layer (real-amplitudes and Bell-state circuits).
SJSui/fluidic-lander
SJSui
2023-07-29T00:23:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-29T00:22:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.51 +/- 15.68 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
rehanhaider/vectors-training-sdxl-1.0
rehanhaider
2023-07-29T00:17:49Z
0
3
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-07-28T14:00:31Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: in the style of wlat_mntn tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - rehanhaider/vectors-training-sdxl-1.0 These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on in the style of wlat_mntn using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: None.
aff1/pichanabooth
aff1
2023-07-28T23:44:02Z
30
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-28T23:42:13Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: pichanayoosuk --- ### pichanabooth Dreambooth model trained by aff1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: pichanayoosuk (use that on your prompt) ![pichanayoosuk 0](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%281%29.jpg)![pichanayoosuk 1](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%282%29.jpg)![pichanayoosuk 2](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%283%29.jpg)![pichanayoosuk 3](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%284%29.jpg)![pichanayoosuk 4](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%285%29.jpg)![pichanayoosuk 5](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%286%29.jpg)![pichanayoosuk 6](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%287%29.jpg)![pichanayoosuk 7](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%288%29.jpg)![pichanayoosuk 8](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%289%29.jpg)![pichanayoosuk 9](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2810%29.jpg)![pichanayoosuk 10](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2811%29.jpg)![pichanayoosuk 11](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2812%29.jpg)![pichanayoosuk 
12](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2813%29.jpg)![pichanayoosuk 13](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2814%29.jpg)![pichanayoosuk 14](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2815%29.jpg)![pichanayoosuk 15](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2816%29.jpg)![pichanayoosuk 16](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2817%29.jpg)![pichanayoosuk 17](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2818%29.jpg)![pichanayoosuk 18](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2819%29.jpg)![pichanayoosuk 19](https://huggingface.co/aff1/pichanabooth/resolve/main/concept_images/pichanayoosuk_%2820%29.jpg)
Allenpai/AlpacaPred
Allenpai
2023-07-28T23:03:00Z
0
0
null
[ "region:us" ]
null
2023-07-22T21:38:59Z
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
undrwolf/a2c-PandaReachDense-v2
undrwolf
2023-07-28T22:53:07Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-27T21:57:40Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.47 +/- 0.16 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
NasimB/children_stories-rarity-seed
NasimB
2023-07-28T22:46:49Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T18:59:42Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: children_stories-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # children_stories-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.0996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3312 | 0.29 | 500 | 5.3354 | | 5.0371 | 0.58 | 1000 | 4.9186 | | 4.6965 | 0.87 | 1500 | 4.6883 | | 4.4447 | 1.16 | 2000 | 4.5459 | | 4.2947 | 1.46 | 2500 | 4.4323 | | 4.1916 | 1.75 | 3000 | 4.3253 | | 4.0754 | 2.04 | 3500 | 4.2524 | | 3.886 | 2.33 | 4000 | 4.2091 | | 3.8693 | 2.62 | 4500 | 4.1548 | | 3.8235 | 2.91 | 5000 | 4.1078 | | 3.6423 | 3.2 | 5500 | 4.0967 | | 3.5745 | 3.49 | 6000 | 4.0700 | | 3.5702 | 3.78 | 6500 | 4.0351 | | 3.489 | 4.07 | 7000 | 4.0307 | | 3.3135 | 4.37 | 7500 | 4.0300 | | 3.3057 | 4.66 | 8000 | 4.0129 | | 3.294 | 4.95 | 8500 | 4.0020 | | 3.151 | 5.24 | 9000 | 4.0121 | | 3.1315 | 5.53 | 9500 | 4.0117 | | 3.127 | 5.82 | 10000 | 4.0105 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
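Cross-entropy losses like the ones in the table map to perplexity via exp(loss), so the final validation loss of 4.0996 corresponds to a perplexity of about 60. A one-line check (our derivation, not a figure reported by this card):

```python
import math

val_loss = 4.0996            # final validation loss from the table above
perplexity = math.exp(val_loss)
print(round(perplexity, 1))  # 60.3
```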
eikoenchine/q-FrozenLake-v1-4x4-noSlippery
eikoenchine
2023-07-28T22:31:59Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T22:31:55Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="eikoenchine/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
SaferChat/falcon-7b-peft-omnibot
SaferChat
2023-07-28T22:30:54Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-28T22:29:48Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
GraydientPlatformAPI/model_165
GraydientPlatformAPI
2023-07-28T22:24:12Z
44
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-28T22:20:07Z
--- library_name: diffusers pipeline_tag: text-to-image ---
NeoCodes-dev/a2c-PandaReachDense-v2
NeoCodes-dev
2023-07-28T22:21:53Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T22:19:07Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.53 +/- 0.35 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Henk717/airochronos-33B
Henk717
2023-07-28T22:11:05Z
1,408
6
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-10T09:46:47Z
--- license: other --- After the initial experiment with chronoboros-33B it was evident that the merge was too unpredictable to be useful. Testing the individual models made it clear that the bias should be weighted towards Chronos. This is the new release of the merge, with 75% chronos 33B and 25% airoboros-1.4 33B. The model has been tested with the Alpaca prompting format combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing. It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
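A 75/25 split of this kind corresponds, per weight tensor, to a linear interpolation of the two parent checkpoints. A toy sketch with numpy arrays standing in for model tensors — this illustrates the ratio only, not the actual merge script used for this release:

```python
import numpy as np

def merge_tensors(chronos_w: np.ndarray, airoboros_w: np.ndarray,
                  alpha: float = 0.75) -> np.ndarray:
    """Linear merge: alpha * chronos + (1 - alpha) * airoboros."""
    return alpha * chronos_w + (1 - alpha) * airoboros_w

a = np.array([1.0, 2.0])    # stand-in for a chronos-33B tensor
b = np.array([3.0, 6.0])    # stand-in for an airoboros-1.4-33B tensor
print(merge_tensors(a, b))  # [1.5 3. ]
```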
Mods13/ExpMix
Mods13
2023-07-28T21:59:04Z
0
1
null
[ "en", "license:other", "region:us" ]
null
2023-04-18T13:35:32Z
--- license: other language: - en ---
cboettig/rl-minicourse
cboettig
2023-07-28T21:54:01Z
1
0
stable-baselines3
[ "stable-baselines3", "biology", "climate", "reinforcement-learning", "en", "license:bsd-2-clause", "region:us" ]
reinforcement-learning
2023-07-28T21:48:10Z
--- license: bsd-2-clause language: - en library_name: stable-baselines3 pipeline_tag: reinforcement-learning tags: - biology - climate --- ## An introduction to Reinforcement Learning for Conservation Applications
tricksiebzehn/frgttnmx_prnd
tricksiebzehn
2023-07-28T21:44:38Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T20:52:18Z
--- license: creativeml-openrail-m ---
sam-fsm/gpt2-on-squad
sam-fsm
2023-07-28T21:13:40Z
66
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-27T16:02:31Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: sam-fsm/gpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # sam-fsm/gpt2-finetuned-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.1287 - Validation Loss: 5.6521 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 6.1287 | 5.6521 | 0 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.1 - Tokenizers 0.13.3
KoboldAI/LLAMA2-13B-Holodeck-1-GGML
KoboldAI
2023-07-28T21:07:59Z
0
11
null
[ "en", "license:other", "region:us" ]
null
2023-07-28T15:03:10Z
--- license: other language: en commercial: no inference: false --- # LLAMA2 13B - Holodeck ## Model Description LLAMA2 13B-Holodeck is a finetune created using Meta's llama 2 model, these are the GGML conversions compatible with [Koboldcpp](https://koboldai.org/cpp). Want to use this in transformers? Check out https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1 ## Training data The training data contains around 3000 ebooks in various genres. Most parts of the dataset have been prepended using the following text: `[Genre: <genre1>, <genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/LLAMA2-13B-Holodeck-1') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). ### License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Extra clause:** You shall use the Materials and Products solely for research purposes or personal use and not for any commercial purpose. Nothing in the Community License shall be construed as granting you a license to use the Materials or Products for any other purpose. ### BibTeX entry and citation info https://huggingface.co/meta-llama/Llama-2-13b-hf
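The genre prefix described under "Training data" follows a fixed pattern, so prompts can be assembled programmatically. A small sketch — the helper name is ours, not part of the model's tooling:

```python
def make_genre_prefix(genres: list[str]) -> str:
    """Build the '[Genre: <genre1>, <genre2>]' prefix used in the training data."""
    return f"[Genre: {', '.join(genres)}]"

# Prepend the prefix to a story opening before sending it to the model.
prompt = (make_genre_prefix(["science fiction", "adventure"])
          + "\nWelcome Captain Janeway, I apologize for the delay.")
print(prompt.splitlines()[0])  # [Genre: science fiction, adventure]
```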
osot/ppo-LunarLander-v2
osot
2023-07-28T20:57:58Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T20:57:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.45 +/- 18.50 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
iamtarun/pycompetitive-codegen350M-qlora
iamtarun
2023-07-28T20:43:50Z
104
1
transformers
[ "transformers", "pytorch", "safetensors", "codegen", "text-generation", "code", "en", "dataset:iamtarun/code_contest_python3_alpaca", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T18:56:05Z
--- datasets: - iamtarun/code_contest_python3_alpaca language: - en metrics: - code_eval library_name: transformers pipeline_tag: text-generation tags: - code widget: - text: "def isprime(num):" example_title: "Code Example 1" - text: "def binary_search" example_title: "Code Example 2" - text: "def square(num):" example_title: "Code Example 3" --- # Competitive Programming LLM for Python Language This model is a finetuned version of [codegen350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) on cleaned coding competition [dataset](https://huggingface.co/datasets/iamtarun/code_contest_python3_alpaca) that uses alpaca style prompts while training. ## Prompt function ```python ''' This function generates prompts using the problem description, sample input, and output examples. @param1 description: str - text problem description @param2 inputs: list - list of sample input examples @param3 outputs: list - list of outputs corresponding to inputs also, len(inputs) == len(outputs) ''' def generate_prompt(description, inputs, outputs): text = ("Below is a problem description that describes the problem. 
Write code in Python that appropriately solves the problem.\n\n" "### Description:\n" f"{description}\n\n") assert len(inputs) == len(outputs) c = 1 for inp, out in zip(inputs, outputs): text += ("### Input:\n" f"{inp}\n" "### Output:\n" f"{out}\n\n") c += 1 if c > 2: break text += "### Code:\n" return text ``` ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer # load model and tokenizer model = AutoModelForCausalLM.from_pretrained("iamtarun/pycompetitive-codegen350M-qlora", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("iamtarun/pycompetitive-codegen350M-qlora") # loading model for inference model.eval() # inference function ''' This function takes text prompt as input which is generated from the generate_prompt function and returns the generated response @param1 prompt: str - text prompt generated using generate_prompt function. ''' def pipe(prompt): device = "cuda" inputs = tokenizer(prompt, return_tensors="pt").to(device) with torch.no_grad(): output = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.5, top_p=0.95, repetition_penalty=1.15) return tokenizer.decode(output[0].tolist(), skip_special_tokens=True, clean_up_tokenization_spaces=False) # generating code for a problem description description = "Mr. Chanek has an integer represented by a string s. Zero or more digits have been erased and are denoted by the character _. There are also zero or more digits marked by the character X, meaning they're the same digit. Mr. Chanek wants to count the number of possible integer s, where s is divisible by 25. Of course, s must not contain any leading zero. He can replace the character _ with any digit. He can also replace the character X with any digit, but it must be the same for every character X. As a note, a leading zero is any 0 digit that comes before the first nonzero digit in a number string in positional notation. For example, 0025 has two leading zeroes. 
An exception is the integer zero, (0 has no leading zero, but 0000 has three leading zeroes). Input One line containing the string s (1 ≤ |s| ≤ 8). The string s consists of the characters 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, _, and X. Output Output an integer denoting the number of possible integer s. Examples Input 25 Output 1 Input _00 Output 9 Input _XX Output 9 Input 0 Output 1 Input 0_25 Output 0 Note In the first example, the only possible s is 25. In the second and third example, s ∈ \{100, 200,300,400,500,600,700,800,900\}. In the fifth example, all possible s will have at least one leading zero." inputs = ["0\n", "_XX\n", "_00\n", "0_25\n"] outputs = ["1\n", "9\n", "9\n", "0\n"] prompt = generate_prompt(description, inputs, outputs) print(pipe(prompt)) print("\n", "="*100, "\n") ```
AlexWortega/FlanFred
AlexWortega
2023-07-28T20:33:26Z
8
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "ru", "en", "dataset:AlexWortega/flan_translated_300k", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-24T21:00:40Z
--- license: mit datasets: - AlexWortega/flan_translated_300k language: - ru - en pipeline_tag: text2text-generation library_name: transformers widget: - text: '<SC6>Человек: Почему трава зеленая?\nОтвет: <extra_id_0>' - text: '<SC1>Ты философ, любящий рассуждать. Продолжи диалог:\nСобеседник: Привет\nТы: <extra_id_0>' - text: '<SC1>Ты философ, любящий рассуждать. Продолжи диалог:\nСобеседник: В чем смысл жизни?\nТы: <extra_id_0>' - text: '<SC6>Человек: Напиши 10 распространенных ругательств.\nОтвет: <extra_id_0>' - text: '<SC1>Ты прикольная девушка Анфиса. Продолжи диалог\nСобеседник: Привет, тебя как звать?\nТы: <extra_id_0>' - text: '<SC1>Ты заботливая жена, говоришь со своим мужем. Продолжи диалог:\nСобеседник: Привет дорогая. Ты сделала ужин?\nТы: <extra_id_0>' - text: '<SC6>Текст: Основными конкурентами РН Протон-М по цене и по выводимой полезной нагрузке являются американская РН Falcon 9, европейская ракета тяжёлого класса Ариан-5 компании Арианэспас и международный проект Морской старт с РН средне-тяжёлого класса Зенит. Кроме того, конкурентами по массе полезной нагрузки, выводимой на орбиту, могут считаться американские носители Атлас-5 и Дельта-4, а также японский носитель H-IIB. 
Тем не менее стоимость последних трёх упомянутых РН значительно превышает стоимость РН Протон-М, и поэтому они фактически не конкурируют с Протоном на рынке коммерческих запусков[145].\nВопрос: Как называется Японский носитель?\nОтвет: <extra_id_0>' --- ``` import torch import transformers use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("AlexWortega/FlanFred") t5_model = transformers.T5ForConditionalGeneration.from_pretrained("AlexWortega/FlanFred") def generate_text(input_str, tokenizer, model, device, max_length=50): # encode the input string to model's input_ids input_ids = tokenizer.encode(input_str, return_tensors='pt').to(device) # generate text with torch.no_grad(): outputs = model.generate(input_ids=input_ids, max_length=max_length, num_return_sequences=1, temperature=0.7, do_sample=True) # decode the output and return the text return tokenizer.decode(outputs[0], skip_special_tokens=True) # usage: input_str = "Hello, how are you?" print(generate_text(input_str, t5_tokenizer, t5_model, device)) ``` # Metrics: ``` | Metric | flanfred | siberianfred | fred | | ------------- | ----- |------ |----- | | xnli_en | 0.51 |0.49 |0.041 | | xnli_ru | 0.71 |0.62 |0.55 | | xwinograd_ru | 0.66 |0.51 |0.54 | ``` # Citation ``` @MISC{AlexWortega/flan_translated_300k, author = {Pavel Ilin, Ksenia Zolian,Ilya kuleshov, Egor Kokush, Aleksandr Nikolich}, title = {Russian Flan translated}, url = {https://huggingface.co/datasets/AlexWortega/flan_translated_300k}, year = 2023 } ```
RohanKilledar/roberta-large-finetuned-music-version-3
RohanKilledar
2023-07-28T20:09:50Z
63
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-28T13:30:13Z
--- license: mit base_model: roberta-large tags: - generated_from_keras_callback model-index: - name: RohanKilledar/roberta-large-finetuned-music-version-3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RohanKilledar/roberta-large-finetuned-music-version-3 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7669 - Validation Loss: 0.6018 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -895, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.7669 | 0.6018 | 0 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.13.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Whyte283/Business
Whyte283
2023-07-28T19:59:10Z
0
1
asteroid
[ "asteroid", "text-classification", "aa", "dataset:Open-Orca/OpenOrca", "license:bigscience-openrail-m", "region:us" ]
text-classification
2023-07-28T19:57:51Z
--- license: bigscience-openrail-m datasets: - Open-Orca/OpenOrca language: - aa metrics: - accuracy library_name: asteroid pipeline_tag: text-classification ---
TalesLF/ppo-CartPole-v2
TalesLF
2023-07-28T19:50:45Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T19:50:33Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 212.95 +/- 91.83 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 5000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 10 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'TalesLF/ppo-CartPole-v2' 'batch_size': 512 'minibatch_size': 128} ```
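The `gamma` and `gae_lambda` values in the hyperparameter dump above parameterize Generalized Advantage Estimation. A minimal sketch of the recursion (illustrative, not the repo's actual implementation):

```python
def gae_advantages(rewards, values, next_value, dones, gamma=0.99, lam=0.95):
    """Compute GAE advantages backwards over one rollout.

    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    A_t     = delta_t + gamma * lam * (1 - done_t) * A_{t+1}
    """
    advantages = [0.0] * len(rewards)
    last_adv = 0.0
    for t in reversed(range(len(rewards))):
        # bootstrap from the value of the state after the rollout at the last step
        bootstrap = next_value if t == len(rewards) - 1 else values[t + 1]
        delta = rewards[t] + gamma * bootstrap * (1 - dones[t]) - values[t]
        last_adv = delta + gamma * lam * (1 - dones[t]) * last_adv
        advantages[t] = last_adv
    return advantages
```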
edures/Reinforce-v3
edures
2023-07-28T19:33:35Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T19:33:33Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 16.50 +/- 15.88 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
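Reinforce agents like this one are trained on discounted Monte-Carlo returns computed at the end of each episode. A minimal sketch of that computation (illustrative, not this repo's code):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, accumulated backwards over an episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns
```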
cgrivaz/FlyBaseGeneAbstractClassifier
cgrivaz
2023-07-28T19:10:29Z
112
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-26T18:37:11Z
--- license: mit widget: - text: "'nord 174 nord. Hedgehog (Hh) and Bone Morphogenetic Proteins (BMPs) pattern the developing Drosophila wing by functioning as short- and long-range morphogens, respectively. Here, we show that a previously unknown Hh-dependent mechanism fine-tunes the activity of BMPs. Through genome-wide expression profiling of the Drosophila wing imaginal discs, we identify nord as a novel target gene of the Hh signaling pathway. Nord is related to the vertebrate Neuron-Derived Neurotrophic Factor (NDNF) involved in congenital hypogonadotropic hypogonadism and several types of cancer. Loss- and gain-of-function analyses implicate Nord in the regulation of wing growth and proper crossvein patterning. At the molecular level, we present biochemical evidence that Nord is a secreted BMP-binding protein and localizes to the extracellular matrix. Nord binds to Decapentaplegic (Dpp) or the heterodimer Dpp-Glass-bottom boat (Gbb) to modulate their release and activity. Furthermore, we demonstrate that Nord is a dosage-dependent BMP modulator, where low levels of Nord promote and high levels inhibit BMP signaling. Taken together, we propose that Hh-induced Nord expression fine-tunes both the range and strength of BMP signaling in the developing Drosophila wing.'" --- # FlyBaseGeneAbstractClassifier This repository hosts the `FlyBaseGeneAbstractClassifier`, a machine learning model designed to classify gene-paper abstract pairs into two labels for Drosophila genes: - LABEL_1: The gene is a topic of the paper. - LABEL_0: The gene is not a topic of the paper. The model was trained on a dataset made from open papers tagged by FlyBase as of February 2022. The training data set consists of 43,000 gene-abstract pairs, and was tested on 4,846 gene-abstract pairs. 
## Requirements

The model requires the `transformers` library and was trained on a system with the following hardware:

- CPU count: 6
- GPU count: 1
- GPU type: NVIDIA A100-SXM4-40GB

## Usage

To use the model, follow these steps:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Initialize the tokenizer and the model.
# The original card loads the tokenizer from "scibert", which is not a valid
# Hub id; the standard id "allenai/scibert_scivocab_uncased" is assumed here.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased", model_max_length=512)
model = AutoModelForSequenceClassification.from_pretrained("cgrivaz/FlyBaseGeneAbstractClassifier", num_labels=2)

# Create a pipeline
model_pipeline = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
```

## Training and Evaluation

Detailed information about the training process and evaluation metrics can be found on the project's Weights & Biases page [here](https://wandb.ai/cgrivaz/gene_tagging/runs/38574bvw).

## Limitations and Future Work

As the model is in its initial version, it is likely that there are areas for improvement and potential biases that have not been thoroughly investigated. Users are encouraged to provide feedback and report any issues they encounter during usage.

## Contributing

Contributions to improve the model are welcome. Please feel free to open an issue or submit a pull request.

## License

mit
NasimB/open_subtitles-rarity-seed
NasimB
2023-07-28T18:57:57Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T16:20:06Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: open_subtitles-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # open_subtitles-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.4199 | 0.3 | 500 | 5.3681 | | 5.1161 | 0.61 | 1000 | 4.9701 | | 4.7841 | 0.91 | 1500 | 4.7418 | | 4.5196 | 1.22 | 2000 | 4.5972 | | 4.387 | 1.52 | 2500 | 4.4915 | | 4.2777 | 1.83 | 3000 | 4.3846 | | 4.1105 | 2.13 | 3500 | 4.3262 | | 3.9981 | 2.44 | 4000 | 4.2709 | | 3.9534 | 2.74 | 4500 | 4.2104 | | 3.8755 | 3.05 | 5000 | 4.1790 | | 3.6789 | 3.35 | 5500 | 4.1529 | | 3.6771 | 3.66 | 6000 | 4.1294 | | 3.6542 | 3.96 | 6500 | 4.0885 | | 3.4322 | 4.27 | 7000 | 4.1019 | | 3.4119 | 4.57 | 7500 | 4.0906 | | 3.4004 | 4.88 | 8000 | 4.0777 | | 3.2902 | 5.18 | 8500 | 4.0884 | | 3.227 | 5.49 | 9000 | 4.0850 | | 3.2263 | 5.79 | 9500 | 4.0849 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
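The hyperparameters above specify a `cosine` scheduler with 1000 warmup steps on a base learning rate of 0.0005. A sketch of that schedule's shape (the total step count of 9500 is an assumption read off the last row of the results table):

```python
import math

def lr_at(step, base_lr=5e-4, warmup=1000, total=9500):
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```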
grace-pro/redo_no_delete_5e-5_hausa
grace-pro
2023-07-28T18:47:31Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:Davlan/afro-xlmr-base", "base_model:finetune:Davlan/afro-xlmr-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-28T18:28:05Z
--- license: mit base_model: Davlan/afro-xlmr-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: redo_no_delete_5e-5_hausa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # redo_no_delete_5e-5_hausa This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1749 - Precision: 0.3010 - Recall: 0.3389 - F1: 0.3188 - Accuracy: 0.9440 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2212 | 1.0 | 729 | 0.1391 | 0.3898 | 0.2023 | 0.2663 | 0.9572 | | 0.1928 | 2.0 | 1458 | 0.1436 | 0.3436 | 0.3058 | 0.3236 | 0.9513 | | 0.1517 | 3.0 | 2187 | 0.1498 | 0.3233 | 0.3384 | 0.3307 | 0.9473 | | 0.1363 | 4.0 | 2916 | 0.1655 | 0.3095 | 0.3353 | 0.3219 | 0.9459 | | 0.1039 | 5.0 | 3645 | 0.1749 | 0.3010 | 0.3389 | 0.3188 | 0.9440 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
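The reported F1 is the harmonic mean of the precision and recall above; a quick consistency check of the evaluation numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# f1_score(0.3010, 0.3389) agrees with the reported 0.3188 to rounding
```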
rghosh8/alpaca7B-lora-support-gpt_ccc_base-llama2
rghosh8
2023-07-28T18:47:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T18:43:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
arhamk/ppo-LunarLander-v2
arhamk
2023-07-28T18:46:19Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T18:45:56Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 229.95 +/- 16.62
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub and load it.
# The filename is assumed; check the repo's file list if it differs.
checkpoint = load_from_hub(
    repo_id="arhamk/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
frankjoshua/stable-diffusion-xl-base-1.0
frankjoshua
2023-07-28T18:13:05Z
69
1
diffusers
[ "diffusers", "onnx", "safetensors", "text-to-image", "stable-diffusion", "arxiv:2307.01952", "arxiv:2211.01324", "arxiv:2108.01073", "arxiv:2112.10752", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-07-30T18:58:53Z
--- license: openrail++ tags: - text-to-image - stable-diffusion --- # SD-XL 1.0-base Model Card ![row01](01.png) ## Model ![pipeline](pipeline.png) [SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps. Note that the base model can be used as a standalone module. Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations. Source code is available at https://github.com/Stability-AI/generative-models . ### Model Description - **Developed by:** Stability AI - **Model type:** Diffusion-based text-to-image generative model - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)). - **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952). 
### Model Sources

For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.

- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion

## Evaluation
![comparison](comparison.png)
The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.

### 🧨 Diffusers

Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```

In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```

To just use the base model, you can run:

```py
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")

# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()

prompt = "An astronaut riding a green horse"

images = pipe(prompt=prompt).images[0]
```

To use the whole base + refiner pipeline as an ensemble of experts you can run:

```py
from diffusers import DiffusionPipeline
import torch

# load both base & refiner
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
refiner.to("cuda")

# Define how many steps and what % of steps to run on each expert (80/20) here
n_steps = 40
high_noise_frac = 0.8

prompt = "A majestic lion jumping from a big stone at night"

# run both experts
image = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images
image = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=image,
).images[0]
```

When using `torch >= 2.0`, you can improve the inference speed by 20-30% with torch.compile. Simply wrap the unet with torch compile before running the pipeline:

```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:

```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```

For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).

### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).

#### OpenVINO

To install Optimum with the dependencies required for OpenVINO :

```bash
pip install optimum[openvino]
```

To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff - from diffusers import StableDiffusionPipeline + from optimum.intel import OVStableDiffusionPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" - pipeline = StableDiffusionPipeline.from_pretrained(model_id) + pipeline = OVStableDiffusionPipeline.from_pretrained(model_id) prompt = "A majestic lion jumping from a big stone at night" image = pipeline(prompt).images[0] ``` You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl). #### ONNX To install Optimum with the dependencies required for ONNX Runtime inference : ```bash pip install optimum[onnxruntime] ``` To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`. ```diff - from diffusers import StableDiffusionPipeline + from optimum.onnxruntime import ORTStableDiffusionPipeline model_id = "stabilityai/stable-diffusion-xl-base-1.0" - pipeline = StableDiffusionPipeline.from_pretrained(model_id) + pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) prompt = "A majestic lion jumping from a big stone at night" image = pipeline(prompt).images[0] ``` You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl). ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. Excluded uses are described below. 
### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
gosorio/robertaSentimentFT_TripAdvisor
gosorio
2023-07-28T18:10:58Z
162
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:argilla/tripadvisor-hotel-reviews", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T02:38:47Z
---
datasets:
- argilla/tripadvisor-hotel-reviews
language:
- en
metrics:
- accuracy: 0.9111
- F-1 score: 0.9061
pipeline_tag: text-classification
---

This is a sentiment analysis model based on the RoBERTa tweet-sentiment pre-trained model (from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment), fine-tuned on a dataset of Trip Advisor reviews (from https://www.kaggle.com/datasets/arnabchaki/tripadvisor-reviews-2023).

Reviews with 1 or 2 stars are considered 'Negative', 3 stars are 'Neutral', and 4 or 5 stars are 'Positive'.

It should be loaded with the following code:

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load pre-trained model and tokenizer
model_name = "gosorio/robertaSentimentFT_TripAdvisor"
tokenizer_name = "cardiffnlp/twitter-roberta-base-sentiment"
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3).to(device)
```
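The star-to-label mapping described above can be written as a small helper when preparing evaluation data (illustrative; the model itself returns class indices, whose ordering should be checked against its config):

```python
def stars_to_sentiment(stars: int) -> str:
    """Label mapping used to build the training set:
    1-2 stars -> Negative, 3 stars -> Neutral, 4-5 stars -> Positive."""
    if stars <= 2:
        return "Negative"
    if stars == 3:
        return "Neutral"
    return "Positive"
```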
zacdennis/PixelCopter
zacdennis
2023-07-28T18:09:11Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T18:08:39Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.30 +/- 19.08 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Aminrabi/diff1000
Aminrabi
2023-07-28T17:42:43Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:None", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-28T16:57:44Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 datasets: - None tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - Aminrabi/diff1000 This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **None** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['[necklace in flowers shape]']: ![val_imgs_grid](./val_imgs_grid.png) ## Pipeline usage You can use the pipeline like so: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("Aminrabi/diff1000", torch_dtype=torch.float16) prompt = "[necklace in flowers shape]" image = pipeline(prompt).images[0] image.save("my_image.png") ``` ## Training info These are the key hyperparameters used during training: * Epochs: 77 * Learning rate: 1e-05 * Batch size: 1 * Gradient accumulation steps: 4 * Image resolution: 512 * Mixed-precision: fp16
MaralGPT/chinkara-7b-improved
MaralGPT
2023-07-28T17:41:24Z
9
0
peft
[ "peft", "license:mit", "region:us" ]
null
2023-07-28T17:08:18Z
---
library_name: peft
license: mit
---

# Chinkara 7B (Improved)

_Chinkara_ is a Large Language Model trained on the [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, based on Meta's LLaMa-2 with 7 billion parameters using the QLoRA technique, optimized for small consumer-size GPUs.

![logo](chinkara-logo.png)

## Information

For more information about the model please visit [prp-e/chinkara](https://github.com/prp-e/chinkara) on Github.

## Inference Guide

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/prp-e/chinkara/blob/main/inference-7b-improved.ipynb)

_NOTE: This part is for the time you want to load and infer the model on your local machine. You still need 8GB of VRAM on your GPU. The recommended GPU is at least a 2080!_

### Installing libraries

```
pip install -U bitsandbytes
pip install -U git+https://github.com/huggingface/transformers.git
pip install -U git+https://github.com/huggingface/peft.git
pip install -U git+https://github.com/huggingface/accelerate.git
pip install -U datasets
pip install -U einops
```

### Loading the model

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"
adapters_name = 'MaralGPT/chinkara-7b-improved'

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: '24000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)

model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

### Setting the model up

```python
from peft import LoraConfig, get_peft_model

model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

### Prompt and inference

```python
prompt = "What is the answer to life, universe and everything?"
prompt = f"###Human: {prompt} ###Assistant:"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=50, temperature=0.5, repetition_penalty=1.0)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.5.0.dev0
pharaouk/hydra-5
pharaouk
2023-07-28T17:36:05Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-07-28T01:57:30Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nikbhi/Reinforce-v0
nikbhi
2023-07-28T17:03:57Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T17:03:47Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
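The Reinforce agent in the card above is trained with the policy-gradient method from Unit 4 of the course. At the core of any such implementation is the computation of discounted returns from an episode's rewards; a minimal stdlib sketch of that step (an illustration of the technique, not the course's exact code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the return G_t = r_t + gamma * G_{t+1} for each step of an episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):  # accumulate from the last step backwards
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# CartPole-v1 gives +1 reward per surviving step; a 3-step episode with gamma=0.5:
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

In REINFORCE these returns (usually normalized) weight the log-probabilities of the taken actions in the policy-gradient loss.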
Alignment-Lab-AI/sophia
Alignment-Lab-AI
2023-07-28T16:58:29Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-28T16:54:15Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
nitro1/llm
nitro1
2023-07-28T16:46:15Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-07-28T16:46:15Z
--- license: bigscience-openrail-m ---
sshalini6/whisper-medium-hi
sshalini6
2023-07-28T16:45:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-27T16:36:17Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
theoldmandthesea/whisper-tiny-finetuned-gtzan-finetuned-gtzan
theoldmandthesea
2023-07-28T16:41:04Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-28T15:48:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: whisper-tiny-finetuned-gtzan-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-gtzan-finetuned-gtzan This model is a fine-tuned version of [ditwoo/whisper-tiny-finetuned-gtzan](https://huggingface.co/ditwoo/whisper-tiny-finetuned-gtzan) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7520 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2831 | 1.0 | 113 | 1.6496 | 0.74 | | 0.6095 | 2.0 | 226 | 1.0397 | 0.77 | | 0.0629 | 3.0 | 339 | 0.8188 | 0.86 | | 0.0006 | 4.0 | 452 | 0.7378 | 0.88 | | 0.0123 | 5.0 | 565 | 0.7285 | 0.9 | | 0.0002 | 6.0 | 678 | 0.9172 | 0.86 | | 0.0002 | 7.0 | 791 | 0.8244 | 0.89 | | 0.0002 | 8.0 | 904 | 0.7630 | 0.9 | | 0.169 | 9.0 | 1017 | 0.7572 | 0.89 | | 0.0002 | 10.0 | 1130 | 0.7520 | 0.9 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
Madhura/qa-model
Madhura
2023-07-28T16:29:24Z
112
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-28T15:59:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: qa-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qa-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.3091 | | 2.7041 | 2.0 | 500 | 1.7406 | | 2.7041 | 3.0 | 750 | 1.6376 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
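Extractive QA models such as the one above output a start logit and an end logit per token; decoding picks the (start, end) pair with the highest combined score, with start ≤ end. A simplified, library-free sketch of that decoding step (real pipelines also cap the span length and map tokens back to character offsets):

```python
def best_span(start_logits, end_logits):
    """Return (start, end) token indices maximizing start_logits[s] + end_logits[e], s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, len(end_logits)):  # enforce start <= end
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

tokens = ["The", "capital", "is", "Paris", "."]
start = [0.1, 0.2, 0.1, 3.0, 0.0]
end = [0.0, 0.1, 0.2, 2.5, 0.1]
s, e = best_span(start, end)
print(tokens[s:e + 1])  # ['Paris']
```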
NasimB/bnc_spoken-rarity-seed
NasimB
2023-07-28T16:18:20Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T14:20:11Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: bnc_spoken-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bnc_spoken-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3595 | 0.29 | 500 | 5.3368 | | 5.0415 | 0.58 | 1000 | 4.9326 | | 4.7185 | 0.88 | 1500 | 4.6926 | | 4.4523 | 1.17 | 2000 | 4.5588 | | 4.3072 | 1.46 | 2500 | 4.4367 | | 4.2083 | 1.75 | 3000 | 4.3335 | | 4.0868 | 2.05 | 3500 | 4.2634 | | 3.8988 | 2.34 | 4000 | 4.2149 | | 3.8845 | 2.63 | 4500 | 4.1600 | | 3.8307 | 2.92 | 5000 | 4.1128 | | 3.646 | 3.22 | 5500 | 4.1082 | | 3.5944 | 3.51 | 6000 | 4.0811 | | 3.5769 | 3.8 | 6500 | 4.0478 | | 3.487 | 4.09 | 7000 | 4.0476 | | 3.3222 | 4.39 | 7500 | 4.0415 | | 3.3218 | 4.68 | 8000 | 4.0281 | | 3.3142 | 4.97 | 8500 | 4.0157 | | 3.161 | 5.26 | 9000 | 4.0304 | | 3.148 | 5.56 | 9500 | 4.0290 | | 3.1368 | 5.85 | 10000 | 4.0281 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
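For causal language models like the card above, the reported cross-entropy validation loss converts directly to perplexity via exp(loss); applied to the final loss of 4.1201:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity of a language model from its mean cross-entropy loss (nats per token)."""
    return math.exp(cross_entropy_loss)

print(round(perplexity(4.1201), 1))  # 61.6
```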
Geotrend/distilbert-base-en-pt-cased
Geotrend
2023-07-28T16:13:06Z
130
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "multilingual", "dataset:wikipedia", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-pt-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations as the original model, which preserves the original accuracy. For more information please see our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-pt-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-pt-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
Geotrend/distilbert-base-en-tr-cased
Geotrend
2023-07-28T16:12:36Z
131
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "multilingual", "dataset:wikipedia", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: multilingual datasets: wikipedia license: apache-2.0 --- # distilbert-base-en-tr-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations as the original model, which preserves the original accuracy. For more information please see our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-tr-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-tr-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
nickovchinnikov/bert-finetuned-ner
nickovchinnikov
2023-07-28T16:11:22Z
119
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-28T09:20:37Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.9345670852610707 - name: Recall type: recall value: 0.9518680578929654 - name: F1 type: f1 value: 0.9431382357845589 - name: Accuracy type: accuracy value: 0.9866957084829575 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0581 - Precision: 0.9346 - Recall: 0.9519 - F1: 0.9431 - Accuracy: 0.9867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0786 | 1.0 | 1756 | 0.0778 | 0.9167 | 0.9359 | 0.9262 | 0.9812 | | 0.0418 | 2.0 | 3512 | 0.0554 | 0.9270 | 0.9461 | 0.9365 | 0.9860 | | 0.0217 | 3.0 | 5268 | 0.0581 | 0.9346 | 0.9519 | 0.9431 | 0.9867 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1 - Datasets 2.12.0 - Tokenizers 0.13.3
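The F1 values in NER cards such as the one above are the harmonic mean of precision and recall; the metadata figures can be cross-checked in a couple of lines:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 reported for token classification)."""
    return 2 * precision * recall / (precision + recall)

# Precision and recall from the card's model-index metadata:
p, r = 0.9345670852610707, 0.9518680578929654
print(round(f1_score(p, r), 4))  # 0.9431, matching the card's F1
```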
anujsahani01/finetuned_mt5
anujsahani01
2023-07-28T16:08:21Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-base", "base_model:finetune:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-28T14:19:19Z
--- license: apache-2.0 base_model: google/mt5-base tags: - generated_from_trainer model-index: - name: finetuned_mt5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_mt5 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 10000 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
edures/Reinforce-vtest
edures
2023-07-28T16:07:14Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T16:02:45Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-vtest results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: -5.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
azhang1212/angela_shuffle_test
azhang1212
2023-07-28T15:49:15Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:Davlan/afro-xlmr-base", "base_model:finetune:Davlan/afro-xlmr-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-28T14:28:54Z
--- license: mit base_model: Davlan/afro-xlmr-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: angela_shuffle_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # angela_shuffle_test This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1672 - Precision: 0.6214 - Recall: 0.4942 - F1: 0.5505 - Accuracy: 0.9504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1882 | 1.0 | 1283 | 0.1566 | 0.6823 | 0.4277 | 0.5258 | 0.9518 | | 0.1551 | 2.0 | 2566 | 0.1507 | 0.6940 | 0.4451 | 0.5423 | 0.9533 | | 0.1385 | 3.0 | 3849 | 0.1545 | 0.6903 | 0.4503 | 0.5450 | 0.9532 | | 0.1163 | 4.0 | 5132 | 0.1610 | 0.6288 | 0.4943 | 0.5535 | 0.9507 | | 0.0994 | 5.0 | 6415 | 0.1672 | 0.6214 | 0.4942 | 0.5505 | 0.9504 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
jordyvl/cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5
jordyvl
2023-07-28T15:40:52Z
164
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-28T07:51:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cdip-small_rvl_cdip-NK1000_kd_CEKD_t2.5_a0.5 This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4315 - Accuracy: 0.8522 - Brier Loss: 0.2145 - Nll: 1.3474 - F1 Micro: 0.8522 - F1 Macro: 0.8535 - Ece: 0.0573 - Aurc: 0.0300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 167 | 1.6705 | 0.6378 | 0.4837 | 2.4248 | 0.6378 | 0.6323 | 0.0655 | 0.1457 | | No log | 2.0 | 334 | 1.1423 | 0.7322 | 0.3740 | 1.9847 | 0.7322 | 0.7285 | 0.0695 | 0.0846 | | 1.7909 | 3.0 | 501 | 0.9082 | 0.7682 | 0.3248 | 1.7674 | 0.7682 | 0.7676 | 0.0620 | 0.0642 | | 1.7909 | 4.0 | 668 | 0.8494 | 0.7865 | 0.3082 | 1.7306 | 0.7865 | 0.7904 | 0.0665 | 0.0560 | | 1.7909 | 5.0 | 835 | 0.7837 | 0.798 | 0.2988 | 1.6072 | 0.798 | 0.7953 | 0.0729 | 0.0553 | | 0.4994 | 6.0 | 1002 | 0.6867 | 0.804 | 0.2862 | 
1.5014 | 0.804 | 0.8059 | 0.0794 | 0.0471 | | 0.4994 | 7.0 | 1169 | 0.7037 | 0.8157 | 0.2797 | 1.5533 | 0.8157 | 0.8178 | 0.0807 | 0.0478 | | 0.4994 | 8.0 | 1336 | 0.6709 | 0.8163 | 0.2756 | 1.5297 | 0.8163 | 0.8166 | 0.0728 | 0.0478 | | 0.2478 | 9.0 | 1503 | 0.6132 | 0.825 | 0.2576 | 1.4349 | 0.825 | 0.8247 | 0.0728 | 0.0398 | | 0.2478 | 10.0 | 1670 | 0.6389 | 0.8235 | 0.2671 | 1.4455 | 0.8235 | 0.8266 | 0.0746 | 0.0419 | | 0.2478 | 11.0 | 1837 | 0.6043 | 0.8257 | 0.2585 | 1.4609 | 0.8257 | 0.8293 | 0.0752 | 0.0403 | | 0.1683 | 12.0 | 2004 | 0.5639 | 0.8327 | 0.2457 | 1.4470 | 0.8327 | 0.8350 | 0.0676 | 0.0375 | | 0.1683 | 13.0 | 2171 | 0.5665 | 0.8317 | 0.2508 | 1.4054 | 0.8317 | 0.8324 | 0.0731 | 0.0388 | | 0.1683 | 14.0 | 2338 | 0.5505 | 0.8403 | 0.2427 | 1.4059 | 0.8403 | 0.8408 | 0.0649 | 0.0377 | | 0.131 | 15.0 | 2505 | 0.5321 | 0.836 | 0.2428 | 1.4078 | 0.836 | 0.8372 | 0.0684 | 0.0365 | | 0.131 | 16.0 | 2672 | 0.5161 | 0.8373 | 0.2383 | 1.3900 | 0.8373 | 0.8373 | 0.0711 | 0.0368 | | 0.131 | 17.0 | 2839 | 0.5177 | 0.8403 | 0.2371 | 1.3828 | 0.8403 | 0.8413 | 0.0633 | 0.0354 | | 0.1071 | 18.0 | 3006 | 0.5113 | 0.8407 | 0.2377 | 1.3832 | 0.8407 | 0.8432 | 0.0718 | 0.0343 | | 0.1071 | 19.0 | 3173 | 0.4949 | 0.8415 | 0.2332 | 1.3767 | 0.8415 | 0.8428 | 0.0667 | 0.0338 | | 0.1071 | 20.0 | 3340 | 0.4857 | 0.848 | 0.2271 | 1.3664 | 0.848 | 0.8492 | 0.0615 | 0.0338 | | 0.0877 | 21.0 | 3507 | 0.4812 | 0.847 | 0.2283 | 1.3360 | 0.847 | 0.8478 | 0.0602 | 0.0346 | | 0.0877 | 22.0 | 3674 | 0.4715 | 0.8495 | 0.2243 | 1.3761 | 0.8495 | 0.8506 | 0.0560 | 0.0320 | | 0.0877 | 23.0 | 3841 | 0.4622 | 0.8508 | 0.2206 | 1.3584 | 0.8508 | 0.8515 | 0.0557 | 0.0323 | | 0.0694 | 24.0 | 4008 | 0.4432 | 0.8515 | 0.2167 | 1.3653 | 0.8515 | 0.8531 | 0.0555 | 0.0309 | | 0.0694 | 25.0 | 4175 | 0.4467 | 0.8498 | 0.2193 | 1.3499 | 0.8498 | 0.8512 | 0.0581 | 0.0309 | | 0.0694 | 26.0 | 4342 | 0.4412 | 0.8545 | 0.2162 | 1.3535 | 0.8545 | 0.8560 | 0.0534 | 0.0306 | | 0.0586 | 27.0 | 4509 | 
0.4402 | 0.8498 | 0.2180 | 1.3390 | 0.8498 | 0.8510 | 0.0597 | 0.0309 | | 0.0586 | 28.0 | 4676 | 0.4408 | 0.8522 | 0.2174 | 1.3568 | 0.8522 | 0.8536 | 0.0576 | 0.0306 | | 0.0586 | 29.0 | 4843 | 0.4391 | 0.851 | 0.2168 | 1.3429 | 0.851 | 0.8523 | 0.0585 | 0.0305 | | 0.0549 | 30.0 | 5010 | 0.4371 | 0.853 | 0.2160 | 1.3389 | 0.853 | 0.8543 | 0.0573 | 0.0303 | | 0.0549 | 31.0 | 5177 | 0.4382 | 0.8498 | 0.2168 | 1.3486 | 0.8498 | 0.8513 | 0.0602 | 0.0304 | | 0.0549 | 32.0 | 5344 | 0.4372 | 0.853 | 0.2166 | 1.3501 | 0.853 | 0.8540 | 0.0591 | 0.0306 | | 0.0527 | 33.0 | 5511 | 0.4379 | 0.852 | 0.2156 | 1.3546 | 0.852 | 0.8531 | 0.0576 | 0.0304 | | 0.0527 | 34.0 | 5678 | 0.4353 | 0.8532 | 0.2154 | 1.3381 | 0.8532 | 0.8543 | 0.0574 | 0.0302 | | 0.0527 | 35.0 | 5845 | 0.4347 | 0.8525 | 0.2148 | 1.3550 | 0.8525 | 0.8535 | 0.0591 | 0.0304 | | 0.0511 | 36.0 | 6012 | 0.4311 | 0.8542 | 0.2141 | 1.3233 | 0.8542 | 0.8552 | 0.0572 | 0.0299 | | 0.0511 | 37.0 | 6179 | 0.4323 | 0.852 | 0.2150 | 1.3332 | 0.852 | 0.8532 | 0.0586 | 0.0302 | | 0.0511 | 38.0 | 6346 | 0.4321 | 0.8515 | 0.2152 | 1.3382 | 0.8515 | 0.8527 | 0.0583 | 0.0299 | | 0.0494 | 39.0 | 6513 | 0.4335 | 0.8495 | 0.2152 | 1.3385 | 0.8495 | 0.8511 | 0.0593 | 0.0303 | | 0.0494 | 40.0 | 6680 | 0.4323 | 0.852 | 0.2146 | 1.3603 | 0.852 | 0.8533 | 0.0576 | 0.0299 | | 0.0494 | 41.0 | 6847 | 0.4309 | 0.8512 | 0.2143 | 1.3448 | 0.8512 | 0.8525 | 0.0570 | 0.0299 | | 0.0477 | 42.0 | 7014 | 0.4327 | 0.8525 | 0.2149 | 1.3439 | 0.8525 | 0.8539 | 0.0580 | 0.0299 | | 0.0477 | 43.0 | 7181 | 0.4309 | 0.8532 | 0.2140 | 1.3406 | 0.8532 | 0.8544 | 0.0560 | 0.0299 | | 0.0477 | 44.0 | 7348 | 0.4308 | 0.8528 | 0.2141 | 1.3404 | 0.8528 | 0.8540 | 0.0573 | 0.0299 | | 0.0466 | 45.0 | 7515 | 0.4317 | 0.8525 | 0.2147 | 1.3402 | 0.8525 | 0.8538 | 0.0580 | 0.0299 | | 0.0466 | 46.0 | 7682 | 0.4317 | 0.8535 | 0.2144 | 1.3475 | 0.8535 | 0.8547 | 0.0553 | 0.0298 | | 0.0466 | 47.0 | 7849 | 0.4314 | 0.8525 | 0.2143 | 1.3479 | 0.8525 | 0.8537 | 0.0559 | 0.0299 | 
| 0.0465 | 48.0 | 8016 | 0.4314 | 0.8525 | 0.2143 | 1.3479 | 0.8525 | 0.8538 | 0.0559 | 0.0299 | | 0.0465 | 49.0 | 8183 | 0.4316 | 0.8528 | 0.2145 | 1.3471 | 0.8528 | 0.8540 | 0.0573 | 0.0299 | | 0.0465 | 50.0 | 8350 | 0.4315 | 0.8522 | 0.2145 | 1.3474 | 0.8522 | 0.8535 | 0.0573 | 0.0300 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
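The Ece column in the card above is the expected calibration error: predictions are grouped into confidence bins, and the gap between each bin's accuracy and its average confidence is averaged, weighted by bin size. A minimal stdlib sketch of the common 10-bin variant (an illustration of the metric, not the evaluation code used for the card):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: size-weighted mean gap between per-bin accuracy and average confidence."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 falls in the last bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# A perfectly calibrated toy batch: 75% confidence, 3 of 4 correct
print(expected_calibration_error([0.75] * 4, [True, True, True, False]))  # 0.0
```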
Sookeyy/ppo-LunarLander-v2
Sookeyy
2023-07-28T15:32:14Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T13:50:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 274.44 +/- 19.95 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
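The mean_reward reported in SB3 cards like the one above (274.44 +/- 19.95) is the mean and standard deviation of episode returns over the evaluation episodes. How such a figure is produced from a list of returns (a stdlib sketch; stable-baselines3's `evaluate_policy` reports the same pair):

```python
import statistics

def summarize_returns(episode_returns):
    """Format evaluation results as 'mean +/- population standard deviation'."""
    mean = statistics.fmean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return f"{mean:.2f} +/- {std:.2f}"

# Hypothetical returns from four LunarLander-v2 evaluation episodes:
print(summarize_returns([250.0, 300.0, 275.0, 270.0]))  # 273.75 +/- 17.81
```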
mauroluiz/Criativo
mauroluiz
2023-07-28T15:30:39Z
0
0
null
[ "region:us" ]
null
2023-07-28T15:16:30Z
Insanely detailed and elaborate jungle scene in a glass globe
xiaol/ruotangwx-rwkv-7b-novel-32k
xiaol
2023-07-28T15:22:19Z
0
8
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-27T13:37:31Z
--- license: apache-2.0 --- Based on RWKV-4 CHNtuned 7B World, trained with only 10 samples as a proof of concept. The model can be tested with the runner, and it still retains its multilingual ability. ------------------------------------ A test model released jointly with 若棠文学. Using only 10 samples, a full fine-tune with a 32k context length was performed on the Chinese-specialized model CHNtuned 7B. It can be tested with the runner; since the format was not strictly normalized during testing, the runner's chat mode automatically cleans up double line breaks, so use the continuation (completion) feature where possible. If you do not know how to debug it or how to use it, it is not recommended. Test prompts can be run in the runner's continuation mode. Training format: User: 请帮我通过以下内容中的大纲,人设,背景,情节,叙事节奏,扩写成一篇完整的小说: ``` {writing outline, character settings, background, etc.; use line breaks liberally to separate them} ``` Assistant: ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/0UHZUkGV-pE_GdgQWcr0R.png) ![image.png](https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/E_KB7slK_TH-HCYtnOT9G.png) Examples are under examples: https://huggingface.co/xiaol/ruotangwx-rwkv-7b-novel-32k/blob/main/novel-exmples/c3-n5.txt
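The card above documents a fixed User/Assistant wrapper around the writing outline; assembling it programmatically is straightforward (a hypothetical helper illustrating the documented format, not code shipped with the model):

```python
def build_novel_prompt(outline: str) -> str:
    """Wrap a writing outline in the User/Assistant format this model was trained on."""
    fence = "`" * 3  # the literal triple-backtick fence used in the training data
    return (
        "User: 请帮我通过以下内容中的大纲,人设,背景,情节,叙事节奏,扩写成一篇完整的小说:\n"
        + fence + "\n" + outline + "\n" + fence + "\n"
        + "Assistant:"
    )

prompt = build_novel_prompt("主角:一名年轻的茶馆老板……")
print(prompt.splitlines()[0][:5])  # User:
```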
Christabelle/thesis-concept-art
Christabelle
2023-07-28T15:04:50Z
0
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-26T20:18:06Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - Christabelle/thesis-concept-art These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the Christabelle/thesis-concept-art-train dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
liuyt75/t5-large_prefix_tuning_sentences_75agree_3
liuyt75
2023-07-28T14:54:56Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-28T14:54:54Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification
NicholasSynovic
2023-07-28T14:54:54Z
110
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "autotrain", "en", "dataset:NicholasSynovic/autotrain-data-luc-comp429-victorian-authorship-classification", "license:agpl-3.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-25T17:37:57Z
--- tags: - autotrain - text-classification language: - en widget: - text: I love AutoTrain datasets: - NicholasSynovic/autotrain-data-luc-comp429-victorian-authorship-classification co2_eq_emissions: emissions: 4.1359796275464005 license: agpl-3.0 metrics: - accuracy - f1 - recall - bertscore pipeline_tag: text-classification --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 52472123757 - CO2 Emissions (in grams): 4.1360 This model reuses and extends a Bert model trained on [NicholasSynovic/Free-AutoTrain-VEAA](https://huggingface.co/datasets/NicholasSynovic/Free-AutoTrain-VEAA) ## Validation Metrics - Loss: 1.425 - Accuracy: 0.636 - Macro F1: 0.504 - Micro F1: 0.636 - Weighted F1: 0.624 - Macro Precision: 0.523 - Micro Precision: 0.636 - Weighted Precision: 0.630 - Macro Recall: 0.508 - Micro Recall: 0.636 - Weighted Recall: 0.636 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("NicholasSynovic/AutoTrain-LUC-COMP429-VEAA-Classification", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("NicholasSynovic/autotrain-luc-comp429-victorian-authorship-classification-52472123757", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
manuu01/a2c-PandaReachDense-v2
manuu01
2023-07-28T14:47:49Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T14:44:34Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.92 +/- 0.17 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ZhiguangHan/test-clm
ZhiguangHan
2023-07-28T14:42:03Z
182
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T14:02:13Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: test-clm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-clm This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6319 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5547 | 1.0 | 2334 | 3.6373 | | 3.4926 | 2.0 | 4668 | 3.6361 | | 3.4692 | 3.0 | 7002 | 3.6319 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
greg-szopinski/Reinforce-pixelcopter-128
greg-szopinski
2023-07-28T14:38:21Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T14:36:26Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter-128 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 33.80 +/- 16.58 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025
jordyvl
2023-07-28T14:26:36Z
106
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-23T21:50:37Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_easyocr_2023-07-23_g025 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2244 - Accuracy: 0.9394 - Exit 0 Accuracy: 0.2721 - Exit 1 Accuracy: 0.4875 - Exit 2 Accuracy: 0.8051 - Exit 3 Accuracy: 0.8840 - Exit 4 Accuracy: 0.9339 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 6 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 144 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | 0.5909 | 1.0 | 2222 | 0.2945 | 0.9158 | 0.2021 | 0.3569 | 0.7091 | 0.8143 | 0.9092 | | 0.4951 | 2.0 | 4444 | 0.2469 | 0.9292 | 0.2262 | 0.4336 | 0.7677 | 0.8614 | 0.9258 | | 0.4279 | 3.0 | 6666 | 0.2281 | 0.9336 | 0.2530 | 0.4682 | 0.7898 | 0.8768 | 0.9302 | | 0.39 | 4.0 | 8888 | 0.2241 | 0.9385 | 0.2600 | 0.483 | 0.8008 | 0.8827 | 0.9328 | | 0.3602 | 5.0 | 11110 | 0.2244 | 0.9394 | 0.2721 | 0.4875 | 0.8051 | 0.8840 | 0.9339 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
Pierre-Arthur/distilroberta_base_eurolex_mlm_model
Pierre-Arthur
2023-07-28T14:22:58Z
171
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "dataset:eurlex_resources", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-28T11:29:42Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_trainer datasets: - eurlex_resources model-index: - name: distilroberta_base_eurolex_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta_base_eurolex_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the eurlex_resources dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 38 | nan | | No log | 2.0 | 76 | nan | | No log | 3.0 | 114 | nan | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.1 - Tokenizers 0.13.3
vogelweide85/my-awesome-setfit-model
vogelweide85
2023-07-28T14:18:20Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-28T14:17:32Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # vogelweide85/my-awesome-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("vogelweide85/my-awesome-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
liuyt75/t5-large_prefix_tuning_sentences_66agree_15
liuyt75
2023-07-28T14:18:15Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T14:18:14Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding-test
Evan-Lin
2023-07-28T14:17:57Z
49
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-28T14:11:44Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding-test") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding-test") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-Yelp-rougelastbatch-attractive1-keywordmax1-decoding-test") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
JunejaeKim/roberta-large-lora-token-classification
JunejaeKim
2023-07-28T14:13:06Z
5
0
peft
[ "peft", "region:us" ]
null
2023-07-28T14:13:01Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Lajonbot/tableBeluga-7B-instruct-pl-lora_unload
Lajonbot
2023-07-28T14:12:09Z
1,396
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T13:59:47Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
kai824/Taxi-V3
kai824
2023-07-28T14:09:01Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T14:05:37Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-V3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="kai824/Taxi-V3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
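Once the pickled model dict is loaded as above, the Q-table is typically consumed greedily at play time: for each observed state, take the action with the highest learned value. A minimal sketch (the tiny two-state table and the `greedy_action` helper below are hypothetical stand-ins for the real Q-table stored in the repo, not part of this model):

```python
def greedy_action(q_table, state):
    """Pick the action index with the highest Q-value for the given state."""
    values = q_table[state]
    return max(range(len(values)), key=lambda a: values[a])

# Hypothetical 2-state, 3-action Q-table standing in for the loaded one.
q_table = {0: [0.1, 0.5, -0.2], 1: [1.0, 0.0, 0.3]}
print(greedy_action(q_table, 0))  # → 1
print(greedy_action(q_table, 1))  # → 0
```

In an evaluation loop this replaces the epsilon-greedy exploration used during training, since at play time only exploitation is wanted.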
twbrandon7/rl-course-unit1
twbrandon7
2023-07-28T14:02:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T14:01:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 249.84 +/- 17.46 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Lajonbot/tableBeluga-7B-instruct-pl-lora_GGML
Lajonbot
2023-07-28T13:59:47Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-28T13:49:06Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Naruke/ppo-Pyramidsv1
Naruke
2023-07-28T13:50:56Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-28T13:25:08Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Naruke/ppo-Pyramidsv1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Lajonbot/tableBeluga-7B-instruct-pl-lora_adapter_model
Lajonbot
2023-07-28T13:49:05Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-28T13:49:04Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
FinchResearch/llama2-archimedes-7b-lora
FinchResearch
2023-07-28T13:47:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T13:47:36Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
LarryAIDraw/ayanami
LarryAIDraw
2023-07-28T13:42:39Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-28T13:32:41Z
--- license: creativeml-openrail-m --- https://civitai.com/models/117469/ayanami-azur-lane
NasimB/aochildes-rarity-seed
NasimB
2023-07-28T13:37:53Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-28T04:40:59Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: aochildes-rarity-seed results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # aochildes-rarity-seed This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1164 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3514 | 0.29 | 500 | 5.3390 | | 5.0369 | 0.59 | 1000 | 4.9222 | | 4.7211 | 0.88 | 1500 | 4.6884 | | 4.4532 | 1.17 | 2000 | 4.5398 | | 4.3029 | 1.47 | 2500 | 4.4318 | | 4.2095 | 1.76 | 3000 | 4.3295 | | 4.0772 | 2.05 | 3500 | 4.2615 | | 3.9042 | 2.35 | 4000 | 4.2130 | | 3.8732 | 2.64 | 4500 | 4.1604 | | 3.8358 | 2.93 | 5000 | 4.1110 | | 3.641 | 3.23 | 5500 | 4.1105 | | 3.5952 | 3.52 | 6000 | 4.0799 | | 3.5797 | 3.81 | 6500 | 4.0466 | | 3.465 | 4.11 | 7000 | 4.0458 | | 3.3242 | 4.4 | 7500 | 4.0451 | | 3.3146 | 4.69 | 8000 | 4.0309 | | 3.3112 | 4.99 | 8500 | 4.0183 | | 3.1524 | 5.28 | 9000 | 4.0325 | | 3.1343 | 5.57 | 9500 | 4.0319 | | 3.1354 | 5.87 | 10000 | 4.0309 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
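The cosine schedule with 1,000 warmup steps listed in the hyperparameters above can be sketched as follows; this is a generic sketch of linear-warmup-plus-cosine-decay, with an assumed total of 10,000 optimizer steps (taken from the training table), not the exact Hugging Face Trainer implementation:

```python
import math

def lr_at(step, base_lr=5e-4, warmup=1000, total=10000):
    """Linear warmup to base_lr, then cosine decay toward zero."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(lr_at(500))    # halfway through warmup: half the peak rate
print(lr_at(1000))   # warmup done: peak rate 5e-4
print(lr_at(10000))  # end of training: decayed to ~0
```

The shape explains why the largest loss drops in the table occur early (rising then peak learning rate) while the final epochs fine-tune with a vanishing rate.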
efainman/Pyramids
efainman
2023-07-28T13:20:59Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-28T13:20:45Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: efainman/Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
jcy204/heat_model
jcy204
2023-07-28T13:14:28Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-28T13:07:33Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: jcy204/heat_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jcy204/heat_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2668 - Validation Loss: 0.5923 - Train Accuracy: 0.7941 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4660, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5977 | 0.5312 | 0.7844 | 0 | | 0.3974 | 0.5389 | 0.7872 | 1 | | 0.2668 | 0.5923 | 0.7941 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.1 - Tokenizers 0.13.3
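The `PolynomialDecay` schedule in the optimizer config above uses `power: 1.0` and `cycle: False`, which reduces to a linear ramp from the initial rate of 2e-05 down to 0 over 4,660 steps. A minimal sketch of that formula (a generic reimplementation for illustration, not the Keras source):

```python
def polynomial_decay_lr(step, initial_lr=2e-5, decay_steps=4660,
                        end_lr=0.0, power=1.0):
    """PolynomialDecay with cycle=False; linear when power == 1.0."""
    step = min(step, decay_steps)  # rate is clamped after decay_steps
    frac = (1 - step / decay_steps) ** power
    return (initial_lr - end_lr) * frac + end_lr

print(polynomial_decay_lr(0))     # start of training: full rate
print(polynomial_decay_lr(2330))  # halfway: half the initial rate
print(polynomial_decay_lr(4660))  # end: fully decayed to end_lr
```

With `end_learning_rate: 0.0` the final steps of epoch 3 are trained at an almost-zero rate, which is consistent with the small loss changes between epochs 1 and 2 in the results table.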
liuyt75/t5-large_prefix_tuning_sentences_66agree_5
liuyt75
2023-07-28T13:07:09Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-28T13:07:08Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
jcy204/cold_model
jcy204
2023-07-28T13:06:43Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-28T13:01:33Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: jcy204/cold_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jcy204/cold_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3079 - Validation Loss: 0.6510 - Train Accuracy: 0.7604 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3185, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6912 | 0.5784 | 0.7513 | 0 | | 0.4713 | 0.5637 | 0.7641 | 1 | | 0.3079 | 0.6510 | 0.7604 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.14.1 - Tokenizers 0.13.3
guyhadad01/ppo-LunarLander-v2
guyhadad01
2023-07-28T13:06:17Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-28T13:05:58Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 282.00 +/- 17.53 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Maldopast/distilhubert-finetuned-gtzan
Maldopast
2023-07-28T12:49:12Z
157
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-28T12:30:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7537 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9647 | 1.0 | 113 | 1.8614 | 0.52 | | 1.3987 | 2.0 | 226 | 1.3098 | 0.61 | | 0.8809 | 3.0 | 339 | 0.8631 | 0.76 | | 0.7643 | 4.0 | 452 | 0.8114 | 0.77 | | 0.5958 | 5.0 | 565 | 0.7013 | 0.81 | | 0.4405 | 6.0 | 678 | 0.5860 | 0.84 | | 0.2183 | 7.0 | 791 | 0.6114 | 0.82 | | 0.1587 | 8.0 | 904 | 0.5141 | 0.85 | | 0.0899 | 9.0 | 1017 | 0.4760 | 0.87 | | 0.0575 | 10.0 | 1130 | 0.5759 | 0.86 | | 0.0647 | 11.0 | 1243 | 0.6467 | 0.86 | | 0.0061 | 12.0 | 1356 | 0.6372 | 0.88 | | 0.0029 | 13.0 | 1469 | 0.6721 | 0.88 | | 0.0018 | 14.0 | 1582 | 0.7565 | 0.89 | | 0.0013 | 15.0 | 1695 | 0.7537 | 0.88 | ### Framework versions - Transformers 4.30.1 - Pytorch 2.0.1+cu117 - Datasets 2.14.0 - Tokenizers 0.13.3