| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 06:26:56 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 538 values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 06:26:41 |
| card | string | length 11 to 1.01M |
chatchitsanu/lunarrrr1111
chatchitsanu
2023-08-07T14:54:39Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T14:54:01Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 288.19 +/- 19.24 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
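The usage section above is left as a TODO, so here is a minimal sketch (not the author's code) of how an SB3 checkpoint like this is typically loaded from the Hub and evaluated. The checkpoint filename is an assumption and should be checked against the repository files.

```python
# Minimal sketch, not from the card: load the PPO checkpoint from the Hub and evaluate it.
# The filename "ppo-LunarLander-v2.zip" is an assumption -- check the repository files.
import gymnasium as gym  # older SB3 versions use `import gym` instead
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="chatchitsanu/lunarrrr1111", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```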
Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
Fredithefish
2023-08-07T14:45:56Z
1,468
3
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "dataset:Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat", "license:cc", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-16T15:07:11Z
--- license: cc datasets: - Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat language: - en inference: false --- <html> <head> <style> .alert { padding: 15px; background-color: #f44336; color: white; } </style> </head> <body> <div class="alert"> <strong>Warning:</strong> This fine-tuned model has only undergone 200 steps of fine-tuning and may not be reliable. The final model will not be released. </div> </body> </html> <br> # RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4 RedPajama-INCITE-Chat-3B Model finetuned <a href="https://huggingface.co/datasets/Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat">on this dataset</a> ## Reproduction The code for the finetuning of this model can be found at https://github.com/fredi-python/Fine-tune-RedPajama-Chat-3B ## Usage and License Notices The Model is intended and licensed for research use only. The model is under the CC BY NC 4.0 license (allowing only non-commercial use)
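The card above shows no inference code; the following is a minimal sketch of basic text generation with transformers. The `<human>:`/`<bot>:` prompt format is an assumption carried over from the base RedPajama-INCITE chat models.

```python
# Minimal sketch, not from the card: load the checkpoint and generate a reply.
# The <human>/<bot> prompt format is an assumption based on the base RedPajama-INCITE chat models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "<human>: Write a short poem about the sea.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```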
Alexanderrotela2000/Ardev-model
Alexanderrotela2000
2023-08-07T14:35:54Z
0
0
null
[ "text-generation", "es", "dataset:roneneldan/TinyStories", "arxiv:1910.09700", "license:openrail", "region:us" ]
text-generation
2023-08-07T14:07:20Z
--- license: openrail datasets: - roneneldan/TinyStories language: - es metrics: - character pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd
jordyvl
2023-08-07T14:34:38Z
163
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-29T07:15:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0540 - Accuracy: 0.859 - Brier Loss: 0.2977 - Nll: 1.1492 - F1 Micro: 0.859 - F1 Macro: 0.8598 - Ece: 0.2784 - Aurc: 0.0325 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | 0.0787 | 1.0 | 1000 | 0.0770 | 0.2978 | 0.9091 | 2.9717 | 0.2978 | 0.2516 | 0.2150 | 0.5174 | | 0.0697 | 2.0 | 2000 | 0.0679 | 0.6372 | 0.6886 | 1.8065 | 0.6372 | 0.6302 | 0.4021 | 0.1388 | | 0.0655 | 3.0 | 3000 | 0.0645 | 0.7492 | 0.5738 | 1.7388 | 0.7492 | 0.7460 | 0.4215 | 0.0921 | | 0.0628 | 4.0 | 4000 | 0.0631 | 0.752 | 0.5394 | 1.7446 | 0.752 | 0.7551 | 0.3922 | 0.0837 | | 0.0611 | 5.0 | 5000 | 0.0612 | 0.768 | 0.4928 | 1.5830 | 0.768 | 0.7700 | 0.3710 | 0.0655 | | 0.0593 | 6.0 | 6000 | 0.0609 | 0.7598 | 0.4655 | 1.5730 | 0.7598 | 0.7667 | 0.3228 | 0.0802 | | 0.0578 | 7.0 | 7000 | 0.0585 | 0.8063 | 0.4195 | 1.4053 | 0.8062 | 0.8065 | 0.3459 | 0.0521 | | 0.0566 | 8.0 | 8000 | 0.0581 | 0.8073 | 0.3997 | 1.2957 | 0.8073 | 0.8084 | 0.3207 | 0.0538 | | 0.0557 | 9.0 | 9000 | 0.0571 | 0.8287 | 0.3810 | 1.3269 | 0.8287 | 0.8301 | 0.3307 | 0.0473 | | 0.0554 | 10.0 | 10000 | 0.0573 | 0.8115 | 0.3780 | 1.3469 | 0.8115 | 0.8128 | 0.3011 | 0.0508 | | 0.0546 | 11.0 | 11000 | 0.0563 | 0.8395 | 0.3549 | 1.2882 | 0.8395 | 0.8401 | 0.3197 | 0.0386 | | 0.0541 | 12.0 | 12000 | 0.0558 | 0.839 | 0.3426 | 1.2653 | 0.839 | 0.8401 | 0.3014 | 0.0394 | | 0.0536 | 13.0 | 13000 | 0.0553 | 0.8465 | 0.3259 | 1.1941 | 0.8465 | 0.8473 | 0.2980 | 0.0357 | | 0.0537 | 14.0 | 14000 | 0.0559 | 0.8303 | 0.3499 | 1.2460 | 0.8303 | 0.8338 | 0.2955 | 0.0427 | | 0.0532 | 15.0 | 15000 | 0.0551 | 0.8445 | 0.3296 | 1.1799 | 0.8445 | 0.8453 | 0.2990 | 0.0360 | | 0.0529 | 16.0 | 16000 | 0.0549 | 0.845 | 0.3224 | 1.1801 | 0.845 | 0.8456 | 0.2895 | 0.0364 | | 0.0527 | 17.0 | 17000 | 0.0549 | 0.849 | 0.3264 | 1.1725 | 0.849 | 0.8503 | 0.2991 | 0.0363 | | 0.0526 | 18.0 | 18000 | 0.0547 | 0.8518 | 0.3170 | 1.1755 | 0.8518 | 0.8527 | 0.2943 | 0.0334 | | 0.0524 | 19.0 | 19000 | 0.0546 | 0.8458 | 0.3213 | 1.1417 | 0.8458 | 0.8466 | 0.2917 | 0.0344 | | 0.0522 | 20.0 | 20000 | 0.0544 | 0.8545 | 0.3105 | 1.1512 | 0.8545 | 0.8542 | 0.2891 | 0.0333 | | 0.052 | 21.0 | 21000 | 0.0542 | 0.855 | 0.3120 | 1.1403 | 0.855 | 0.8555 | 0.2940 | 0.0333 | | 0.0518 | 22.0 | 22000 | 0.0542 | 0.854 | 0.3096 | 
1.1533 | 0.854 | 0.8545 | 0.2893 | 0.0319 | | 0.0517 | 23.0 | 23000 | 0.0541 | 0.8545 | 0.3098 | 1.1445 | 0.8545 | 0.8556 | 0.2920 | 0.0315 | | 0.0516 | 24.0 | 24000 | 0.0540 | 0.8578 | 0.3097 | 1.1273 | 0.8578 | 0.8586 | 0.2958 | 0.0315 | | 0.0514 | 25.0 | 25000 | 0.0540 | 0.8532 | 0.3076 | 1.1579 | 0.8532 | 0.8533 | 0.2849 | 0.0342 | | 0.0513 | 26.0 | 26000 | 0.0540 | 0.855 | 0.3055 | 1.1269 | 0.855 | 0.8563 | 0.2855 | 0.0325 | | 0.0511 | 27.0 | 27000 | 0.0538 | 0.8565 | 0.3029 | 1.1571 | 0.8565 | 0.8572 | 0.2827 | 0.0334 | | 0.051 | 28.0 | 28000 | 0.0538 | 0.8598 | 0.3012 | 1.1409 | 0.8598 | 0.8604 | 0.2851 | 0.0317 | | 0.0509 | 29.0 | 29000 | 0.0537 | 0.86 | 0.3003 | 1.1525 | 0.8600 | 0.8603 | 0.2839 | 0.0323 | | 0.0508 | 30.0 | 30000 | 0.0537 | 0.8575 | 0.3024 | 1.1430 | 0.8575 | 0.8585 | 0.2849 | 0.0319 | | 0.0507 | 31.0 | 31000 | 0.0537 | 0.8595 | 0.3015 | 1.1454 | 0.8595 | 0.8603 | 0.2859 | 0.0311 | | 0.0507 | 32.0 | 32000 | 0.0537 | 0.8598 | 0.3005 | 1.1463 | 0.8598 | 0.8603 | 0.2847 | 0.0316 | | 0.0506 | 33.0 | 33000 | 0.0537 | 0.8598 | 0.2966 | 1.1392 | 0.8598 | 0.8605 | 0.2800 | 0.0309 | | 0.0506 | 34.0 | 34000 | 0.0537 | 0.8562 | 0.3018 | 1.1442 | 0.8562 | 0.8574 | 0.2813 | 0.0327 | | 0.0505 | 35.0 | 35000 | 0.0537 | 0.855 | 0.2995 | 1.1402 | 0.855 | 0.8556 | 0.2790 | 0.0324 | | 0.0505 | 36.0 | 36000 | 0.0537 | 0.8575 | 0.2980 | 1.1324 | 0.8575 | 0.8582 | 0.2783 | 0.0314 | | 0.0504 | 37.0 | 37000 | 0.0538 | 0.8562 | 0.2981 | 1.1429 | 0.8562 | 0.8570 | 0.2770 | 0.0320 | | 0.0503 | 38.0 | 38000 | 0.0538 | 0.8565 | 0.2997 | 1.1319 | 0.8565 | 0.8573 | 0.2795 | 0.0324 | | 0.0503 | 39.0 | 39000 | 0.0538 | 0.857 | 0.2988 | 1.1447 | 0.857 | 0.8578 | 0.2791 | 0.0320 | | 0.0502 | 40.0 | 40000 | 0.0538 | 0.8588 | 0.2982 | 1.1409 | 0.8588 | 0.8595 | 0.2798 | 0.0320 | | 0.0502 | 41.0 | 41000 | 0.0538 | 0.8572 | 0.2982 | 1.1455 | 0.8572 | 0.8580 | 0.2781 | 0.0319 | | 0.0502 | 42.0 | 42000 | 0.0538 | 0.8602 | 0.2979 | 1.1357 | 0.8602 | 0.8609 | 0.2809 | 0.0320 | | 0.0501 | 43.0 | 43000 | 0.0539 | 0.8568 | 0.2987 | 1.1462 | 0.8568 | 0.8574 | 0.2787 | 0.0322 | | 0.0501 | 44.0 | 44000 | 0.0539 | 0.8595 | 0.2974 | 1.1456 | 0.8595 | 0.8602 | 0.2789 | 0.0322 | | 0.0501 | 45.0 | 45000 | 0.0539 | 0.8592 | 0.2980 | 1.1460 | 0.8592 | 0.8601 | 0.2792 | 0.0322 | | 0.05 | 46.0 | 46000 | 0.0539 | 0.8588 | 0.2979 | 1.1441 | 0.8588 | 0.8596 | 0.2787 | 0.0322 | | 0.05 | 47.0 | 47000 | 0.0540 | 0.8592 | 0.2983 | 1.1501 | 0.8592 | 0.8600 | 0.2793 | 0.0324 | | 0.05 | 48.0 | 48000 | 0.0540 | 0.8588 | 0.2980 | 1.1462 | 0.8588 | 0.8595 | 0.2787 | 0.0324 | | 0.05 | 49.0 | 49000 | 0.0540 | 0.8598 | 0.2978 | 1.1507 | 0.8598 | 0.8604 | 0.2793 | 0.0324 | | 0.05 | 50.0 | 50000 | 0.0540 | 0.859 | 0.2977 | 1.1492 | 0.859 | 0.8598 | 0.2784 | 0.0325 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
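The card above documents training only; the following is a minimal sketch (not from the card) of running the fine-tuned ViT classifier on a document image with the transformers pipeline. The input filename is hypothetical.

```python
# Minimal sketch, not from the card: document-image classification with the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd",
)
predictions = classifier("scanned_page.png")  # hypothetical path to a scanned document image
print(predictions[:3])  # top predicted document classes with scores
```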
yannicake/article-classifier-setfit
yannicake
2023-08-07T14:33:58Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-08-07T14:33:12Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # yannicake/article-classifier-setfit This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("yannicake/article-classifier-setfit") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
ctu-aic/flan-t5-large
ctu-aic
2023-08-07T14:27:58Z
71
0
transformers
[ "transformers", "pytorch", "t5", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-08-07T14:00:29Z
This model's tokenizer is extended with Czech (CS), Slovak (SK) and Polish (PL) accented characters using the following code: ````python from transformers import ( AutoModel, AutoTokenizer, ) model_id = "google/flan-t5-large" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModel.from_pretrained(model_id) accents = "áčďéěíňóřšťúůýž" # CS accents += "ąćęłńóśźż" # PL accents += "áäčďéíĺľňóôŕšťúýž" # SK accents += accents.upper() accents = set(c for c in accents) new_tokens = accents - set(tokenizer.vocab.keys()) tokenizer.add_tokens(list(new_tokens)) model.resize_token_embeddings(len(tokenizer)) ````
Junlaii/wiki_dister_head_LSTM_fintune_final
Junlaii
2023-08-07T14:17:37Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-08-07T14:17:28Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Gustar8/Llama-QA-fine-tuned
Gustar8
2023-08-07T14:16:37Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T14:16:22Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
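The card above lists only the bitsandbytes config; the following is a minimal sketch of recreating that 4-bit config and attaching the adapter with PEFT. The base model id is an assumption, since the card does not name it.

```python
# Minimal sketch: recreate the 4-bit nf4 config listed above and attach the PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumption -- the card does not state the base model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, "Gustar8/Llama-QA-fine-tuned")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```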
jwb220/q-FrozenLake-v1-4x4-noSlippery
jwb220
2023-08-07T14:15:35Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T14:15:34Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jwb220/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
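The usage snippet above calls `load_from_hub` without defining it; the following is a minimal sketch of such a helper. It mirrors the Deep RL course pattern (the Q-table stored as a pickle file) and is an assumption, not code taken from this repository.

```python
# Minimal sketch of the load_from_hub helper the snippet above relies on (an assumption).
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="jwb220/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False for the no_slippery variant
```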
exyou/nexodus-flan-t5
exyou
2023-08-07T14:09:53Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-31T18:29:14Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
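For this 8-bit variant, the following minimal sketch loads the base seq2seq model with `load_in_8bit=True`, matching the config above, and attaches the adapter. The base checkpoint is an assumption, since the card does not state which flan-t5 size was used.

```python
# Minimal sketch: 8-bit loading matching the config above, plus the PEFT adapter.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "google/flan-t5-base"  # assumption -- the card does not name the base checkpoint
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base_model, "exyou/nexodus-flan-t5")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```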
Ulomaster/GPTNeo-Nawasena-small
Ulomaster
2023-08-07T14:06:58Z
116
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-01T04:10:39Z
--- language: - en library_name: transformers --- # Nawasena model ### Model Description <!-- Provide a longer summary of what this model is. --> This is a story-building model trained on a collection of Japanese light novels translated into English. The model was inspired by the Griffin AI Dungeon model and is intended as an entertainment model that generates interesting and creative stories. Unfortunately, due to cost and compute limitations, we were only able to train it for 12 hours, and even then with a dataset of no more than 100 MB. - **Developed by:** Hll-AI Production - **Model type:** Text Generation - **Language(s):** English - **Finetuned from model:** GPT-Neo ### Information <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> This model is not very large and can be somewhat unstable. We trained it for only 12 hours because our compute was limited, but we hope that future versions of this language model will be better. Its main weakness is its very small context size. For now, the model does not perform its task very well, but you can try it out or fine-tune it further to improve it. ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> We use our own dataset, dataset_light_novel_EN. Its size is around 38.7 MB, which is very small. Update: the dataset is now 73.4 MB, which is still small. ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** GPU T4 15GB - **Hours used:** 12 hours - **Cloud Provider:** Google Colab - **Carbon Emitted:** 0.47 kg Because we used the free version of Google Colab, we only generated around 0.47 kg of emissions. We are not entirely sure about this estimate, but it seems like quite a lot. ## Model Card Authors Hll-AI Production
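The card above describes the model's purpose but shows no usage code; the following is a minimal sketch (not from the card) of generating a story continuation with the transformers pipeline.

```python
# Minimal sketch, not from the card: story continuation with the text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="Ulomaster/GPTNeo-Nawasena-small")
story = generator(
    "The old lighthouse keeper opened the door and",
    max_new_tokens=80,
    do_sample=True,
    temperature=0.9,
)
print(story[0]["generated_text"])
```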
tilyupo/t5-base-trivia-gpu-ca2q
tilyupo
2023-08-07T14:01:10Z
61
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-07T14:00:41Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_keras_callback model-index: - name: t5-base-trivia-gpu-ca2q results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-trivia-gpu-ca2q This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9662 - Validation Loss: 1.2201 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.0002, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False} - training_precision: mixed_bfloat16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.3940 | 1.2142 | 0 | | 1.1260 | 1.2087 | 1 | | 0.9662 | 1.2201 | 2 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.13.0 - Datasets 2.14.3 - Tokenizers 0.13.3
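The card above documents training only; the following is a minimal sketch of inference with the TensorFlow checkpoint. The `answer: ... context: ...` input format is a guess based on the "ca2q" (context + answer to question) naming and should be verified against the training setup.

```python
# Minimal sketch with the TensorFlow weights; the input format is an assumption.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "tilyupo/t5-base-trivia-gpu-ca2q"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "answer: Paris context: Paris is the capital and largest city of France."
inputs = tokenizer(text, return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```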
patrickvonplaten/lora-trained-xl
patrickvonplaten
2023-08-07T13:48:23Z
1
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-08-04T14:25:43Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks dog tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - patrickvonplaten/lora-trained-xl These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
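The card above shows no inference code; the following is a minimal sketch assuming the standard diffusers LoRA loading path: load the SDXL base pipeline and attach these adapter weights.

```python
# Minimal sketch, not from the card: SDXL base pipeline plus the LoRA weights from this repo.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("patrickvonplaten/lora-trained-xl")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```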
nerdylive/deberta-zeroshot
nerdylive
2023-08-07T13:42:33Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-05T03:34:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: nerdylive/deberta-zeroshot results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nerdylive/deberta-zeroshot This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2575 - Validation Loss: 0.1900 - Train Accuracy: {'accuracy': 0.92612} - Train F1 Score: {'f1': 0.9268080047553003} - Train Precision: {'precision': 0.9182567726737338} - Train Recall: {'recall': 0.93552} - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 125000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Train F1 Score | Train Precision | Train Recall | Epoch | |:----------:|:---------------:|:---------------------:|:--------------------------:|:---------------------------------:|:-------------------:|:-----:| | 0.2575 | 0.1900 | {'accuracy': 0.92612} | {'f1': 0.9268080047553003} | {'precision': 0.9182567726737338} | {'recall': 0.93552} | 0 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.1.0 - Tokenizers 0.13.3
Satish678/req2case_PROMPT_TUNING_CAUSAL_LM
Satish678
2023-08-07T13:36:26Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T13:36:24Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
nkpz/llama2-22b-empath-alpacagpt4
nkpz
2023-08-07T13:24:56Z
11
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-07T03:21:27Z
--- license: other --- Experimental: Created using an unofficial and unsupported method. I have no metrics on how this performs against 13b and I'm not planning on gathering any at this point. Still has weak spots that need work. https://huggingface.co/nkpz/llama2-22b-blocktriangular-alpaca with further conversational and instruction fine-tuning. First, I trained it on an epoch of https://huggingface.co/datasets/Adapting/empathetic_dialogues_v2 to give it a decent base knowledge of a casual chat style. I added some automated capitalization fixes for this data. The result was conversational, but not very smart. Then I trained it on an epoch of https://huggingface.co/datasets/vicgalle/alpaca-gpt4 and landed here, a model that is capable of chatting but very focused on following instructions. If you would like to run this in 4-bit, you can use the Hugging Face backend in KoboldAI (or, in a different script, the `load_in_4bit` kwarg when calling `from_pretrained`). GPTQ conversion has so far resulted in broken output for me, YMMV. **Future Ideas** - **This strongly prefers the Alpaca prompt format and will try to autocomplete it if you don't provide it.** I'd like to work on removing this fixation and making it more flexible. - I would also like to filter the rows containing the phrases "AI assistant" and "virtual assistant" from all future runs. - I think it might also help to do a short run on a dataset focused on character impersonation. **Prompting** Standard prompt format examples: ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: List 3 ingredients for the following recipe. ### Input: Spaghetti Bolognese ### Response: ``` Or ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: List 3 ingredients for the following recipe: Spaghetti Bolognese ### Response: ``` For a chat session, I've had success using this simplified prompt: ``` ### Scenario You are speaking with Alexander Graham Bell ### Begin Chat (Format: [Person1]: [Message]\n[Person2]: [Message]) You: Hey, can you tell me a little bit about yourself? ``` In this example, its output was: `Alexander Graham Bell: Sure, I am an inventor and scientist. I'm most known for inventing the telephone.` You can customize the use of `### ` prefixed labels to create your own structure.
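Putting the card's own notes together, the following is a minimal sketch of running the model in 4-bit via the `load_in_4bit` kwarg and prompting it with the Alpaca format it prefers; it is an illustration, not code taken from the card.

```python
# Minimal sketch following the card's notes: 4-bit loading and the Alpaca prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nkpz/llama2-22b-empath-alpacagpt4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList 3 ingredients for the following recipe: Spaghetti Bolognese\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```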
StofEzz/mascir_fr_wav2vec_version1000
StofEzz
2023-08-07T13:22:47Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-31T07:58:52Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer metrics: - wer model-index: - name: mascir_fr_wav2vec_version1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mascir_fr_wav2vec_version1000 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4677 - Wer: 0.37 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3896 | 4.0 | 500 | 3.0842 | 1.0 | | 1.6969 | 8.0 | 1000 | 0.6327 | 0.5956 | | 0.3056 | 12.0 | 1500 | 0.5024 | 0.49 | | 0.1441 | 16.0 | 2000 | 0.5241 | 0.45 | | 0.091 | 20.0 | 2500 | 0.4997 | 0.44 | | 0.0676 | 24.0 | 3000 | 0.5173 | 0.4456 | | 0.0603 | 28.0 | 3500 | 0.4487 | 0.4122 | | 0.0378 | 32.0 | 4000 | 0.4554 | 0.3933 | | 0.0328 | 36.0 | 4500 | 0.4395 | 0.3822 | | 0.0275 | 40.0 | 5000 | 0.4910 | 0.3889 | | 0.0198 | 44.0 | 5500 | 0.4861 | 0.3722 | | 0.019 | 48.0 | 6000 | 0.4677 | 0.37 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
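The card above documents training only; the following is a minimal sketch (not from the card) of transcribing an audio file with the fine-tuned checkpoint. The input filename is hypothetical and should point to a 16 kHz mono recording.

```python
# Minimal sketch, not from the card: transcription with the ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="StofEzz/mascir_fr_wav2vec_version1000")
print(asr("recording.wav")["text"])  # "recording.wav" is a hypothetical audio file
```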
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_unload
Aspik101
2023-08-07T13:19:39Z
1,481
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-07T13:12:42Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_GGML
Aspik101
2023-08-07T13:12:42Z
0
4
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-08-07T13:09:18Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
slone/mbart-large-51-mul-myv-v1
slone
2023-08-07T13:11:52Z
117
0
transformers
[ "transformers", "pytorch", "safetensors", "mbart", "text2text-generation", "erzya", "mordovian", "translation", "myv", "ru", "fi", "de", "es", "en", "hi", "zh", "tr", "uk", "fr", "ar", "dataset:slone/myv_ru_2022", "dataset:yhavinga/ccmatrix", "arxiv:2209.09368", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-09-15T06:13:29Z
--- language: - myv - ru - fi - de - es - en - hi - zh - tr - uk - fr - ar tags: - erzya - mordovian - translation license: cc-by-sa-4.0 datasets: - slone/myv_ru_2022 - yhavinga/ccmatrix --- This is a model for translating texts into the Erzya language (`myv`, Cyrillic script) from 11 other languages: `ru,fi,de,es,en,hi,zh,tr,uk,fr,ar`. See its [demo](https://huggingface.co/spaces/slone/myv-translation-2022-demo)! It is described in the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368). This model is based on [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50), but with updated vocabulary and checkpoint: - Added an extra language token `myv_XX` and 19K new BPE tokens for the Erzya language; - Fine-tuned to translate to Erzya: first from Russian, then from all 11 languages. The following code can be used to run translation using the model: ```Python from transformers import MBartForConditionalGeneration, MBart50Tokenizer def fix_tokenizer(tokenizer): """ Add a new language token to the tokenizer vocabulary (this should be done each time after its initialization) """ old_len = len(tokenizer) - int('myv_XX' in tokenizer.added_tokens_encoder) tokenizer.lang_code_to_id['myv_XX'] = old_len-1 tokenizer.id_to_lang_code[old_len-1] = 'myv_XX' tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id) tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()} if 'myv_XX' not in tokenizer._additional_special_tokens: tokenizer._additional_special_tokens.append('myv_XX') tokenizer.added_tokens_encoder = {} def translate(text, model, tokenizer, src='ru_RU', trg='myv_XX', max_length='auto', num_beams=3, repetition_penalty=5.0, train_mode=False, n_out=None, **kwargs): tokenizer.src_lang = src encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024) if max_length == 'auto': max_length = int(32 + 1.5 * encoded.input_ids.shape[1]) if train_mode: model.train() else: model.eval() generated_tokens = model.generate( **encoded.to(model.device), forced_bos_token_id=tokenizer.lang_code_to_id[trg], max_length=max_length, num_beams=num_beams, repetition_penalty=repetition_penalty, num_return_sequences=n_out or 1, **kwargs ) out = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) if isinstance(text, str) and n_out is None: return out[0] return out mname = 'slone/mbart-large-51-mul-myv-v1' model = MBartForConditionalGeneration.from_pretrained(mname) tokenizer = MBart50Tokenizer.from_pretrained(mname) fix_tokenizer(tokenizer) print(translate('Привет, собака!', model, tokenizer, src='ru_RU', trg='myv_XX')) # Шумбрат, киска! # indeed, that is exactly how you say "dog" in Erzya print(translate('Hello, doggy!', model, tokenizer, src='en_XX', trg='myv_XX')) # Шумбрат, киска! ```
kejolong/slipddress
kejolong
2023-08-07T13:08:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-07T12:59:51Z
--- license: creativeml-openrail-m ---
Aspik101/WizardVicuna-Uncensored-3B-instruct-PL-lora_adapter_model
Aspik101
2023-08-07T13:07:32Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-08-07T13:07:30Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
TheRains/yt-special-batch4-2lr5-small
TheRains
2023-08-07T13:07:15Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:yt", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-07T05:30:15Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - whisper-event - generated_from_trainer datasets: - yt metrics: - wer model-index: - name: Whisper Small Indonesian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: yt id type: yt metrics: - name: Wer type: wer value: 51.26775176707088 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Indonesian This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the yt id dataset. It achieves the following results on the evaluation set: - Loss: 0.7838 - Wer: 51.2678 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.134 | 0.09 | 1000 | 1.0407 | 97.9768 | | 0.8923 | 0.17 | 2000 | 0.9185 | 89.0539 | | 0.9713 | 0.26 | 3000 | 0.8536 | 58.9132 | | 0.7834 | 0.34 | 4000 | 0.7838 | 51.2678 | | 0.78 | 0.43 | 5000 | 0.7438 | 52.1951 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
stablediffusionapi/beautiful-realistic
stablediffusionapi
2023-08-07T13:06:33Z
22
3
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-07T11:11:10Z
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # Beautiful Realistic Asians API Inference ![generated from stablediffusionapi.com](https://cdn.stablediffusionapi.com/generations/17820532521690966918.png) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/), no payment needed. Replace the key in the code below and change **model_id** to "beautiful-realistic". Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Try the model for free: [Generate Images](https://stablediffusionapi.com/models/beautiful-realistic) Model link: [View model](https://stablediffusionapi.com/models/beautiful-realistic) Credits: [View credits](https://civitai.com/?query=Beautiful%20Realistic%20Asians) View all models: [View Models](https://stablediffusionapi.com/models) ```python import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "your_api_key", "model_id": "beautiful-realistic", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
brishtiteveja/llama-2-7b-guanaco-dolly-15k
brishtiteveja
2023-08-07T13:03:21Z
2
0
peft
[ "peft", "llama2", "finetuned", "llama2-finetuned", "dataset:databricks/databricks-dolly-15k", "region:us" ]
null
2023-08-07T13:01:04Z
--- library_name: peft datasets: - databricks/databricks-dolly-15k tags: - llama2 - finetuned - llama2-finetuned --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
weav-geng/llama2-qlora-finetuned-resume-v5
weav-geng
2023-08-07T12:59:24Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-07T12:57:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
Q93WnX4FUHx2mJ/e5-multi-large-sbert
Q93WnX4FUHx2mJ
2023-08-07T12:59:15Z
17
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-08-07T12:34:44Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AtilliO/Chopper_Sat_01
AtilliO
2023-08-07T12:57:25Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Heli", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Heli", "region:us" ]
reinforcement-learning
2023-08-07T12:54:38Z
--- library_name: ml-agents tags: - Heli - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Heli --- # **ppo** Agent playing **Heli** This is a trained model of a **ppo** agent playing **Heli** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: AtilliO/Chopper_Sat_01 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
tilyupo/t5-small-trivia-gpu-ca2q
tilyupo
2023-08-07T12:39:23Z
59
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "base_model:google/flan-t5-small", "base_model:finetune:google/flan-t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-08-06T16:20:10Z
--- license: apache-2.0 base_model: google/flan-t5-small tags: - generated_from_keras_callback model-index: - name: t5-small-trivia-gpu-ca2q results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-trivia-gpu-ca2q This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2675 - Validation Loss: 1.3898 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.00014285714, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False} - training_precision: mixed_bfloat16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.7429 | 1.4649 | 0 | | 1.4976 | 1.4196 | 1 | | 1.3663 | 1.3913 | 2 | | 1.2675 | 1.3898 | 3 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.13.0 - Datasets 2.14.3 - Tokenizers 0.13.3
FatemahAlsubaiei/AraELECTRA-CGSQuAD-QA-Model2
FatemahAlsubaiei
2023-08-07T12:11:35Z
236
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "ar", "dataset:FatemahAlsubaiei/CGSQuAD", "endpoints_compatible", "region:us" ]
question-answering
2023-08-07T11:52:43Z
--- datasets: - FatemahAlsubaiei/CGSQuAD language: - ar metrics: - f1 - exact_match library_name: transformers pipeline_tag: question-answering ---
flax-community/alberti-bert-base-multilingual-cased
flax-community
2023-08-07T12:10:54Z
19
6
transformers
[ "transformers", "pytorch", "jax", "joblib", "safetensors", "bert", "fill-mask", "multilingual", "es", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: es license: cc-by-4.0 tags: - multilingual - bert pipeline_tag: fill-mask widget: - text: ¿Qué es la vida? Un [MASK]. --- <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Update:</b> This model has been moved to <a href="https://huggingface.co/linhd-postdata/alberti-bert-base-multilingual-cased">linhd-postdata/alberti-bert-base-multilingual-cased</a>, where it will be maintained and updated. </p> </div> # ALBERTI ALBERTI is a set of two BERT-based multilingual model for poetry. One for verses and another one for stanzas. This model has been further trained with the PULPO corpus for verses using [Flax](https://github.com/google/flax), including training scripts. This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google. ## PULPO PULPO, the Prodigious Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 95M words. The following corpora has been downloaded using the [Averell](https://github.com/linhd-postdata/averell/) tool, developed by the [POSTDATA](https://postdata.linhd.uned.es/) team: ### Spanish - [Disco v3](https://github.com/pruizf/disco) - [Corpus of Spanish Golden-Age Sonnets](https://github.com/bncolorado/CorpusSonetosSigloDeOro) - [Corpus general de poesía lírica castellana del Siglo de Oro](https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOro) - [Gongocorpus](https://github.com/linhd-postdata/gongocorpus) - [source](http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poetica) ### English - [Eighteenth-Century Poetry Archive (ECPA)](https://github.com/alhuber1502/ECPA) - [For better for verse](https://github.com/waynegraham/for_better_for_verse) ### French - [Métrique en Ligne](https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueil) - [source](https://github.com/linhd-postdata/metrique-en-ligne) ### Italian - [Biblioteca italiana](https://github.com/linhd-postdata/biblioteca_italiana) - [source](http://www.bibliotecaitaliana.it/) ### Czech - [Corpus of Czech Verse](https://github.com/versotym/corpusCzechVerse) ### Portuguese - [Stichotheque](https://gitlab.com/stichotheque/stichotheque-pt) Also, we obtained the following corpora from these sources: ### Spanish - [Poesi.as](https://github.com/linhd-postdata/poesi.as) - [source](http://www.poesi.as/) ### English - [A Gutenberg Poetry Corpus](https://github.com/aparrish/gutenberg-poetry-corpus) ### Arabic - [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry) ### Chinese - [THU Chinese Classical Poetry Corpus](https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPC) ### Finnish - [SKVR](https://github.com/sks190/SKVR) ### German - [TextGrid Poetry Corpus](https://github.com/linhd-postdata/textgrid-poetry) - [source](https://textgrid.de/en/digitale-bibliothek) - [German Rhyme Corpus](https://github.com/tnhaider/german-rhyme-corpus) ### Hungarian - [verskorpusz](https://github.com/ELTE-DH/verskorpusz) ### Portuguese - [Poems in Portuguese](https://www.kaggle.com/oliveirasp6/poems-in-portuguese) ### Russian - [19 000 Russian poems](https://www.kaggle.com/grafstor/19-000-russian-poems) ## Team members - Álvaro Pérez 
([alvp](https://huggingface.co/alvp)) - Javier de la Rosa ([versae](https://huggingface.co/versae)) - Aitor Díaz ([aitordiaz](https://huggingface.co/aitordiaz)) - Elena González-Blanco - Salvador Ros ([salva](https://huggingface.co/salva)) ## Useful links - [Community Week timeline](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104#summary-timeline-calendar-6) - [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/bertin-pretrain-roberta-large-from-scratch-in-spanish/7125) - [Community Week channel](https://discord.com/channels/858019234139602994/859113060068229190) - [Masked Language Modelling example scripts](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) - [Model Repository](https://huggingface.co/flax-community/alberti-bert-base-multilingual-cased/) ## Acknowledgments This project would not have been possible without the infrastructure and resources provided by HuggingFace and Google Cloud. Moreover, we want to thank POSTDATA Project (ERC-StG-679528) and the Computational Literary Studies Infrastructure (CLS INFRA No. 101004984) of the European Union's Horizon 2020 research and innovation programme for their support and time allowance.
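The ALBERTI card above includes a fill-mask widget example but no usage code; the following is a minimal sketch (not from the card) of querying the model with that same example via the fill-mask pipeline.

```python
# Minimal sketch, not from the card: fill-mask inference with the widget example.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="flax-community/alberti-bert-base-multilingual-cased")
for prediction in fill_mask("¿Qué es la vida? Un [MASK].")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```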
jakobkruse/ppo-Huggy
jakobkruse
2023-08-07T12:07:54Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-08-07T12:07:49Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: jakobkruse/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
AlexDr/model1
AlexDr
2023-08-07T11:58:45Z
0
0
adapter-transformers
[ "adapter-transformers", "legal", "climate", "uk", "dataset:Anthropic/hh-rlhf", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "region:us" ]
null
2023-08-07T11:55:48Z
--- license: apache-2.0 datasets: - Anthropic/hh-rlhf - Open-Orca/OpenOrca language: - uk metrics: - bertscore - accuracy library_name: adapter-transformers tags: - legal - climate ---
dc-at-hf/lunarlander
dc-at-hf
2023-08-07T11:50:43Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T11:50:23Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.32 +/- 22.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dmitrijsk/Bloomz_marketing_tutorial
dmitrijsk
2023-08-07T11:39:13Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-07T11:39:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
starboi/bloomz-560m_PROMPT_TUNING_CAUSAL_LM_ENV
starboi
2023-08-07T11:37:38Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T11:37:37Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
a1nkit/distilhubert-finetuned-gtzan
a1nkit
2023-08-07T11:32:46Z
160
0
transformers
[ "transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-01T02:06:34Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.85 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.6477 - Accuracy: 0.85 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.1618 | 1.0 | 75 | 2.0497 | 0.36 | | 1.5327 | 2.0 | 150 | 1.4568 | 0.62 | | 1.1622 | 3.0 | 225 | 1.1626 | 0.66 | | 0.849 | 4.0 | 300 | 0.9894 | 0.74 | | 0.6072 | 5.0 | 375 | 0.8128 | 0.75 | | 0.4014 | 6.0 | 450 | 0.7118 | 0.79 | | 0.3285 | 7.0 | 525 | 0.7482 | 0.83 | | 0.3074 | 8.0 | 600 | 0.5633 | 0.85 | | 0.242 | 9.0 | 675 | 0.6613 | 0.82 | | 0.069 | 10.0 | 750 | 0.5173 | 0.85 | | 0.1281 | 11.0 | 825 | 0.6102 | 0.83 | | 0.0334 | 12.0 | 900 | 0.5990 | 0.84 | | 0.0307 | 13.0 | 975 | 0.6227 | 0.86 | | 0.0339 | 14.0 | 1050 | 0.6331 | 0.85 | | 0.0239 | 15.0 | 1125 | 0.6477 | 0.85 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
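The card ends at the training summary without usage code. A minimal, untested sketch for running this checkpoint as a genre classifier might look like the following; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an audio-classification pipeline
classifier = pipeline("audio-classification", model="a1nkit/distilhubert-finetuned-gtzan")

# "song.wav" is a placeholder; any audio file readable by the pipeline should work
predictions = classifier("song.wav")
print(predictions)  # list of {"label": <genre>, "score": <probability>} dicts
```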
BabaYaga048/MC_Reinforce
BabaYaga048
2023-08-07T11:24:01Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T11:23:50Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: MC_Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
stablediffusionapi/l5C32ZWSMrwiIfRnEhbAYOk6T
stablediffusionapi
2023-08-07T11:22:48Z
17
1
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-07T10:10:04Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# API Inference

![generated from stablediffusionapi.com](https://pub-8b49af329fae499aa563997f5d4068a4.r2.dev/generations/l5C32ZWSMrwiIfRnEhbAYOk6T.png)

## Get API Key

Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.

Replace the key in the code below and change **model_id** to "l5C32ZWSMrwiIfRnEhbAYOk6T".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try the model for free: [Generate Images](https://stablediffusionapi.com/models/l5C32ZWSMrwiIfRnEhbAYOk6T)

Model link: [View model](https://stablediffusionapi.com/models/l5C32ZWSMrwiIfRnEhbAYOk6T)

Credits: [View credits](https://civitai.com/?query=)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "l5C32ZWSMrwiIfRnEhbAYOk6T",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
Evan-Lin/Bart-abs-yelp-allure-rouge
Evan-Lin
2023-08-07T11:18:31Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-08-07T04:58:35Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmpv_lsdew_/Evan-Lin/Bart-abs-yelp-allure-rouge") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmpv_lsdew_/Evan-Lin/Bart-abs-yelp-allure-rouge") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmpv_lsdew_/Evan-Lin/Bart-abs-yelp-allure-rouge") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Yntec/samaritan3dCartoon2MVAE
Yntec
2023-08-07T11:14:21Z
487
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "3D", "art", "style", "checkpoint", "PromptSharingSamaritan", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-07T10:18:54Z
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image thumbnail: https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0MGXGAxBpd-qDBWPYnWhR.png tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - 3D - art - style - checkpoint - PromptSharingSamaritan - diffusers inference: true --- # samaritan 3d Cartoon 2 This model with the MoistMix VAE baked in. Previews and prompt: ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/5xHgcpT11v-EEo12rSM8u.png) ![sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/0MGXGAxBpd-qDBWPYnWhR.png) (lora)0.5 , (amakawa hano)0.5 , 1 girl, ray tracing, {best quality}, {{masterpiece}}, {highres}, original, extremely detailed 8K wallpaper, {an extremely delicate and beautiful}, , incredibly_absurdres, colorful, intricate detail, artbook Original pages: https://civitai.com/models/81270?modelVersionId=113299 https://civitai.com/api/download/models/14459?type=VAE
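The card shows sample prompts but no loading code. A minimal diffusers sketch (untested; the shortened prompt and sampler settings are only illustrative) could be:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint with the baked-in MoistMix VAE directly from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/samaritan3dCartoon2MVAE", torch_dtype=torch.float16
).to("cuda")

# Shortened version of the sample prompt above
prompt = "1 girl, ray tracing, best quality, masterpiece, extremely detailed 8K wallpaper, colorful, intricate detail"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("samaritan3d_sample.png")
```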
Liea/q-Taxi-v3-set_eval_seed
Liea
2023-08-07T11:12:00Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T11:11:56Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-set_eval_seed
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Liea/q-Taxi-v3-set_eval_seed", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
FatemahAlsubaiei/AraELECTRA-CGSQuAD-QA-Model1
FatemahAlsubaiei
2023-08-07T11:11:02Z
236
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "ar", "dataset:FatemahAlsubaiei/CGSQuAD", "endpoints_compatible", "region:us" ]
question-answering
2023-08-07T11:00:51Z
--- datasets: - FatemahAlsubaiei/CGSQuAD language: - ar metrics: - f1 - exact_match library_name: transformers pipeline_tag: question-answering ---
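Since this card contains only metadata, a minimal usage sketch for extractive question answering might look like this; the Arabic question and context are placeholders, not examples from CGSQuAD:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="FatemahAlsubaiei/AraELECTRA-CGSQuAD-QA-Model1")

# Placeholder Arabic question and context; replace with your own text
result = qa(question="ما هي عاصمة فرنسا؟", context="باريس هي عاصمة فرنسا وأكبر مدنها.")
print(result["answer"], result["score"])
```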
morell23/misellia
morell23
2023-08-07T11:06:10Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-07T11:00:11Z
--- license: creativeml-openrail-m ---
Danielwei0214/guwenbert-base-ched-event_detection
Danielwei0214
2023-08-07T10:44:50Z
139
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "classical chinese", "ancient chinese", "event detection", "zh", "dataset:Danielwei0214/CHED_Event_Detection", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-01T16:12:04Z
--- datasets: - Danielwei0214/CHED_Event_Detection language: - zh metrics: - accuracy pipeline_tag: token-classification tags: - classical chinese - ancient chinese - event detection widget: - text: 庄襄王为秦质子于赵,见吕不韦姬,悦而取之,生始皇。 - text: 丁巳,克安州,承裕奔于云梦,全节执而杀之。 - text: 夏四月,帝次龙德,拔德顺等州,德顺节度使爱申、进士马肩龙死焉。 ---
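The widget sentences above can also be run locally. A minimal sketch (untested; `aggregation_strategy` is an optional convenience, not something the card specifies) might be:

```python
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="Danielwei0214/guwenbert-base-ched-event_detection",
    aggregation_strategy="simple",  # merge sub-tokens into event spans
)

# One of the widget examples from the card
print(detector("庄襄王为秦质子于赵,见吕不韦姬,悦而取之,生始皇。"))
```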
prantik-s/realistic_vision
prantik-s
2023-08-07T10:42:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-07T10:34:55Z
--- license: creativeml-openrail-m ---
AtilliO/chopper_05
AtilliO
2023-08-07T10:37:53Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Heli", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Heli", "region:us" ]
reinforcement-learning
2023-08-07T10:37:44Z
---
library_name: ml-agents
tags:
- Heli
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Heli
---

# **ppo** Agent playing **Heli**
This is a trained model of a **ppo** agent playing **Heli** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: AtilliO/chopper_05
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
yoonlee/model
yoonlee
2023-08-07T10:35:26Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-07T08:36:24Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks cat tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - yoonlee/model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: True.
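The card names the instance prompt but gives no inference code. A minimal DreamBooth inference sketch (untested; the extra prompt words and sampler settings are arbitrary) could look like this:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("yoonlee/model", torch_dtype=torch.float16).to("cuda")

# "sks cat" is the instance token this DreamBooth run was trained on
image = pipe(
    "a photo of sks cat sitting on a windowsill",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_cat.png")
```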
taehoon1lee/Reinforce-unit4
taehoon1lee
2023-08-07T10:32:41Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T10:32:31Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-unit4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Lilsunx/falconsun
Lilsunx
2023-08-07T10:15:07Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-07T10:14:18Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
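For readers who want to reload the adapter under the same quantization settings, a sketch is shown below. Note that the card does not name the base model; `tiiuae/falcon-7b` is only a guess based on the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reproduces the 4-bit NF4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "tiiuae/falcon-7b"  # assumption: the base model is not stated in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "Lilsunx/falconsun")
```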
lylogummy/BunnyIlly
lylogummy
2023-08-07T10:05:10Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-07T10:01:17Z
--- license: creativeml-openrail-m --- --WIP-- https://civitai.com/models/124276?modelVersionId=135662
Junlaii/wiki_LSTM_fintune_final
Junlaii
2023-08-07T10:02:24Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-08-07T10:02:17Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
hihellooo2024/opt-350m-lora-1024
hihellooo2024
2023-08-07T09:56:33Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-07T09:56:32Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
divya9103/llama2-divya
divya9103
2023-08-07T09:43:44Z
0
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-08-07T07:07:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
thi-doan/distilbert-base-uncased-finetuned-emotion-dupe
thi-doan
2023-08-07T09:42:41Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-07T02:59:07Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-dupe results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-dupe This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2190 - Accuracy: 0.9245 - F1: 0.9245 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8335 | 1.0 | 250 | 0.3207 | 0.905 | 0.9035 | | 0.2537 | 2.0 | 500 | 0.2190 | 0.9245 | 0.9245 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
abhibarman/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
abhibarman
2023-08-07T09:40:49Z
2
0
peft
[ "peft", "region:us" ]
null
2023-08-07T09:40:48Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
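A possible way to load this prompt-tuning adapter on top of its base model is sketched below. The base checkpoint `bigscience/bloomz-560m` is inferred from the repository name, and the prompt string is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloomz-560m"  # inferred from the repo name, not stated in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Load the prompt-tuning adapter on top of the frozen base model
model = PeftModel.from_pretrained(base, "abhibarman/bloomz-560m_PROMPT_TUNING_CAUSAL_LM")

# Illustrative prompt only; the training prompt format is not documented in the card
inputs = tokenizer("Tweet text: I loved the new update! Label:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0], skip_special_tokens=True))
```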
akdeniz27/LunarLander-v2
akdeniz27
2023-08-07T09:13:22Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T09:13:17Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -235.91 +/- 123.58 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters
lalitrajput/distilbert-base-uncased-finetuned-squad
lalitrajput
2023-08-07T09:04:09Z
116
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-08-07T07:25:32Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 5.5896 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
Shekhar2681/YEAR-1
Shekhar2681
2023-08-07T09:03:37Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2023-08-07T09:03:37Z
--- license: bigscience-bloom-rail-1.0 ---
muhtasham/TajBERTo
muhtasham
2023-08-07T09:02:29Z
178
4
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "tg", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- tg
widget:
- text: "Пойтахти <mask> Душанбе"
- text: "<mask> ба ин сайти шумо медароям."
- text: "Номи ман Акрам <mask>"
tags:
- generated_from_trainer
model_index:
- name: TajBERTo
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
---

# TajBERTo: RoBERTa-like Language model trained on Tajik

## First ever Tajik NLP model 🔥

## Dataset

This model was trained on a filtered and merged version of the Leipzig Corpora: https://wortschatz.unileipzig.de/en/download/Tajik

## Intended use

You can use the raw model for masked text generation or fine-tune it to a downstream task.

## Example pipeline

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="muhtasham/TajBERTo",
    tokenizer="muhtasham/TajBERTo"
)

fill_mask("Пойтахти <mask> Душанбе")

# Example output:
{'score': 0.1952248513698578, 'sequence': 'Пойтахти шаҳри Душанбе', 'token': 710, 'token_str': ' шаҳри'},
{'score': 0.029092855751514435, 'sequence': 'Пойтахти дар Душанбе', 'token': 310, 'token_str': ' дар'},
{'score': 0.020065447315573692, 'sequence': 'Пойтахти Душанбе Душанбе', 'token': 717, 'token_str': ' Душанбе'},
{'score': 0.016725927591323853, 'sequence': 'Пойтахти Тоҷикистон Душанбе', 'token': 424, 'token_str': ' Тоҷикистон'},
{'score': 0.011400512419641018, 'sequence': 'Пойтахти аз Душанбе', 'token': 335, 'token_str': ' аз'}
```
tkathuria/finetuning-emotion-model-12000-samples
tkathuria
2023-08-07T09:01:38Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-07T08:49:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: finetuning-emotion-model-12000-samples results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: test args: split metrics: - name: Accuracy type: accuracy value: 0.92 - name: F1 type: f1 value: 0.920048011482891 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model-12000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2588 - Accuracy: 0.92 - F1: 0.9200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
kasrahabib/KM45L6V2OC
kasrahabib
2023-08-07T08:59:42Z
76
1
transformers
[ "transformers", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-03T13:03:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: kasrahabib/KM45L6V2OC results: [] language: - en widget: - text: "The START NEW PROJECT function shall allow the user to create a new project." example_title: "Requirment 1" - text: "The email string consists of x@x.x and is less than 31 characters in length and is not empty." example_title: "Requirment 2" --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kasrahabib/KM45L6V2OC This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), for classifying softwrae requirments into functional (F) and Non-functional (NF) types, on Software Requirements Dataset (SWARD). It achieves the following results on the evaluation set: - Train Loss: 0.0107 - Validation Loss: 0.0404 - Epoch: 14 - Final Macro F1-score: 0.99 <b>Labels</b>: 0 or F -> Functional; 1 or NF -> Non-functional; ## Usage Pipeline ```python from transformers import pipeline frame_work = 'tf' task = 'text-classification' model_ckpt = 'kasrahabib/KM45L6V2OC' software_requirment_cls = pipeline(task = task, model = model_ckpt, framework = frame_work) example_1_f = 'The START NEW PROJECT function shall allow the user to create a new project.' example_2_nf = 'The email string consists of x@x.x and is less than 31 characters in length and is not empty.' software_requirment_cls([example_1_f, example_2_nf]) ``` ``` [{'label': 'F', 'score': 0.9998922348022461}, {'label': 'NF', 'score': 0.999846339225769}] ``` ## Model Inference: ```python import numpy as np from transformers import AutoTokenizer, TFAutoModelForSequenceClassification model_ckpt = 'kasrahabib/KM45L6V2OC' tokenizer = AutoTokenizer.from_pretrained(model_ckpt) model = TFAutoModelForSequenceClassification.from_pretrained(model_ckpt) example_1_f = 'The START NEW PROJECT function shall allow the user to create a new project.' example_2_nf = 'The email string consists of x@x.x and is less than 31 characters in length and is not empty.' requirements = [example_1_f, example_2_nf] encoded_requirements = tokenizer(requirements, return_tensors = 'np', padding = 'longest') y_pred = model(encoded_requirements).logits classifications = np.argmax(y_pred, axis = 1) classifications = [model.config.id2label[output] for output in classifications] print(classifications) ``` ``` ['F', 'NF'] ``` ## Usage Locally Downloaded (e.g., GitHub): 1 - Clone the repository: ```shell git lfs install git clone url_of_repo ``` 2 - Locate the path to the downloaded directory <br> 3 - Write the link to the path in the ```model_ckpt``` variable <br> Then modify the code as below: ```python import numpy as np from transformers import AutoTokenizer, TFAutoModelForSequenceClassification model_ckpt = 'rest_of_the_path/KM45L6V2OC' tokenizer = AutoTokenizer.from_pretrained(model_ckpt) model = TFAutoModelForSequenceClassification.from_pretrained(model_ckpt) example_1_f = 'The START NEW PROJECT function shall allow the user to create a new project.' example_2_nf = 'The email string consists of x@x.x and is less than 31 characters in length and is not empty.' 
requirements = [example_1_f, example_2_nf]

encoded_requirements = tokenizer(requirements, return_tensors = 'np', padding = 'longest')

y_pred = model(encoded_requirements).logits
classifications = np.argmax(y_pred, axis = 1)
classifications = [model.config.id2label[output] for output in classifications]
print(classifications)
```
```
['F', 'NF']
```

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Framework versions

- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
Dangurangu/distilbert-base-uncased-finetuned-sentiment
Dangurangu
2023-08-07T08:55:47Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-07T08:36:56Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sentiment results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: sentiment split: validation args: sentiment metrics: - name: Accuracy type: accuracy value: 0.7285 - name: F1 type: f1 value: 0.7289390753190282 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6208 - Accuracy: 0.7285 - F1: 0.7289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6922 | 1.0 | 713 | 0.6267 | 0.7195 | 0.7208 | | 0.5571 | 2.0 | 1426 | 0.6208 | 0.7285 | 0.7289 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
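A minimal usage sketch for this classifier (untested; depending on the saved config, the returned labels may appear as `LABEL_0`/`LABEL_1`/`LABEL_2` rather than sentiment names):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dangurangu/distilbert-base-uncased-finetuned-sentiment")

# Placeholder input sentence
print(classifier("I can't believe how good this album is!"))
```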
TheRains/yt-special-batch12-base
TheRains
2023-08-07T08:54:52Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:yt", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-07T07:37:18Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - whisper-event - generated_from_trainer datasets: - yt metrics: - wer model-index: - name: Whisper Small Indonesian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: yt id type: yt metrics: - name: Wer type: wer value: 55.89780169898191 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Indonesian This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the yt id dataset. It achieves the following results on the evaluation set: - Loss: 0.9330 - Wer: 55.8978 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.0995 | 0.26 | 1000 | 1.1249 | 91.3559 | | 0.9995 | 0.52 | 2000 | 1.0126 | 68.1344 | | 0.9872 | 0.77 | 3000 | 0.9620 | 65.9425 | | 0.7043 | 1.03 | 4000 | 0.9330 | 55.8978 | | 0.7292 | 1.29 | 5000 | 0.9224 | 62.5057 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
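A minimal transcription sketch for this fine-tuned Whisper checkpoint (untested; the audio path is a placeholder for an Indonesian clip):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TheRains/yt-special-batch12-base",
    chunk_length_s=30,  # allows transcribing clips longer than 30 seconds
)

# "speech.mp3" is a placeholder for an Indonesian audio file
print(asr("speech.mp3")["text"])
```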
IHaveNoClueAndIMustPost/llama2-22B-GPLATTY-GGML
IHaveNoClueAndIMustPost
2023-08-07T08:43:16Z
0
0
null
[ "llama", "llama-2", "license:other", "region:us" ]
null
2023-08-07T05:27:21Z
---
license: other
tags:
- llama
- llama-2
---

A couple of GGML conversions of [llama2-22B-GPLATTY](https://huggingface.co/grimpep/llama2-22B-GPLATTY) by [grimpep](https://huggingface.co/grimpep)
<br>From my testing the stop token is "\</s\>". You may need to add this manually in SillyTavern if the model refuses to stop jabbering.
Aspik101/llama-30b-2048-instruct-PL-lora_unload
Aspik101
2023-08-07T08:37:07Z
1,484
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-07T08:17:15Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
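Since the card is metadata-only, a minimal generation sketch is given below. Loading a 30B model this way needs substantial GPU memory, and the exact instruction template used during fine-tuning is not documented here, so the plain Polish prompt is only an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aspik101/llama-30b-2048-instruct-PL-lora_unload"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Polish instruction; the fine-tuning prompt template is an assumption, not documented in the card
prompt = "Napisz krótkie streszczenie historii Polski."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```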
TheRains/yt-special-batch12-tiny
TheRains
2023-08-07T08:33:28Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:yt", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-07T07:37:28Z
--- license: apache-2.0 base_model: openai/whisper-tiny tags: - whisper-event - generated_from_trainer datasets: - yt metrics: - wer model-index: - name: Whisper Small Indonesian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: yt id type: yt metrics: - name: Wer type: wer value: 71.27942416185721 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Indonesian This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the yt id dataset. It achieves the following results on the evaluation set: - Loss: 1.1267 - Wer: 71.2794 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.375 | 0.26 | 1000 | 1.3639 | 103.1969 | | 1.2229 | 0.52 | 2000 | 1.2348 | 81.9791 | | 1.2384 | 0.77 | 3000 | 1.1719 | 87.5041 | | 0.9738 | 1.03 | 4000 | 1.1389 | 71.3832 | | 0.9485 | 1.29 | 5000 | 1.1267 | 71.2794 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
RoversX/StableBeluga-7B-Qlora-Samantha-Zh-V1
RoversX
2023-08-07T08:32:54Z
8
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:ehartford/samantha-data", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-08-03T13:38:43Z
---
datasets:
- ehartford/samantha-data
language:
- zh
- en
pipeline_tag: text-generation
---

# StableBeluga-7B-Qlora-Samantha-Zh-V1

StableBeluga-7B-Qlora-Samantha-Zh-V1 is a conversational AI assistant based on [stabilityai/StableBeluga-7B](https://huggingface.co/stabilityai/StableBeluga-7B) and trained on the samantha-1.1-zh dataset from [ehartford/samantha-data](https://huggingface.co/datasets/ehartford/samantha-data).

## Model Details

![Train](https://ucarecdn.com/11ea8fe5-322c-41a7-a7cf-36de196f3421/)

Stable Beluga 7B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Stable Beluga 7B
```
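A minimal generation sketch that follows the prompt format shown above (untested; the system text and sampling settings are only examples):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RoversX/StableBeluga-7B-Qlora-Samantha-Zh-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt follows the Stable Beluga format above; the system text is just an example
prompt = (
    "### System:\nYou are Samantha, a caring and helpful assistant.\n\n"
    "### User:\n你好,今天过得怎么样?\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```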
bennyguo/zero123-diffusers
bennyguo
2023-08-07T08:31:53Z
150
4
diffusers
[ "diffusers", "safetensors", "arxiv:2303.11328", "license:mit", "diffusers:Zero123Pipeline", "region:us" ]
null
2023-08-01T08:38:23Z
--- license: mit --- # Uses _Note: This section is originally taken from the [Stable Diffusion v2 model card](https://huggingface.co/stabilityai/stable-diffusion-2), but applies in the same way to Zero-1-to-3._ ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include: - Safe deployment of large-scale models. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism. - The model cannot render legible text. - Faces and people in general may not be parsed or generated properly. - The autoencoding part of the model is lossy. - Stable Diffusion was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, Stability AI has filtered the dataset using LAION's NSFW detector. - Zero-1-to-3 was subsequently finetuned on a subset of the large-scale dataset [Objaverse](https://objaverse.allenai.org/), which might also potentially contain inappropriate content. To partially mitigate this, our demo applies a safety check to every uploaded image. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Images and concepts from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as Western cultures are often overrepresented. Stable Diffusion mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. 
### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model inputs against known hard-coded NSFW concepts. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the uploaded input images. The concepts are passed into the model with the image and compared to a hand-engineered weight for each NSFW concept. ## Citation ``` @misc{liu2023zero1to3, title={Zero-1-to-3: Zero-shot One Image to 3D Object}, author={Ruoshi Liu and Rundi Wu and Basile Van Hoorick and Pavel Tokmakov and Sergey Zakharov and Carl Vondrick}, year={2023}, eprint={2303.11328}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
noahkln/vicuna-13b-v1.5-16k-no-cache
noahkln
2023-08-07T08:28:52Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2307.09288", "arxiv:2306.05685", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-06T18:52:24Z
--- inference: false license: llama2 --- **Note:** This is a preview version. A slightly better checkpoint will be uploaded soon. # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [Llama 2](https://arxiv.org/abs/2307.09288) ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights - APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api ## Training Details Vicuna v1.5 (16k) is fine-tuned from Llama 2 with supervised instruction fine-tuning and linear RoPE scaling. The training data is around 125K conversations collected from ShareGPT.com. These conversations are packed into sequences that contain 16K tokens each. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation ![Evaluation Results](https://github.com/lm-sys/lm-sys.github.io/blob/main/public/images/webdata/vicuna_v1.5_eval.png?raw=true) Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
oottka/roberta-large-lora-token-classification
oottka
2023-08-07T08:26:36Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T08:26:34Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Aspik101/llama-30b-2048-instruct-PL-lora_GGML
Aspik101
2023-08-07T08:17:14Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-08-07T07:39:33Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
jakobkruse/ppo-LunarLander-v2
jakobkruse
2023-08-07T07:51:28Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T07:51:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.24 +/- 51.56 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
edures/ppo_implementation-LunarLander-v2
edures
2023-08-07T07:51:05Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T07:50:58Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -188.83 +/- 105.39 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'edures/ppo_implementation-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
LuizNeves/DeBERTa-v3-large-mnli-fever-anli-ling-wanli-vaccine
LuizNeves
2023-08-07T07:46:50Z
106
0
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2023-08-04T09:49:25Z
--- pipeline_tag: zero-shot-classification ---
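This card only sets the pipeline tag, so a minimal zero-shot sketch is given below; the example sentence and candidate labels are placeholders chosen to match the model's apparent vaccine focus:

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="LuizNeves/DeBERTa-v3-large-mnli-fever-anli-ling-wanli-vaccine",
)

# Placeholder premise and labels; replace with your own
result = classifier(
    "The new vaccine showed strong protection with only mild side effects in the trial.",
    candidate_labels=["vaccine efficacy", "vaccine safety concern", "unrelated to vaccines"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```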
vishnu-vs/llama
vishnu-vs
2023-08-07T07:43:52Z
5
0
transformers
[ "transformers", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-08-07T07:37:21Z
--- inference: false language: - en license: other model_type: llama pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Meta's Llama 2 13B-chat GPTQ These files are GPTQ model files for [Meta's Llama 2 13B-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML) * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-13B-chat-hf) ## Prompt template: Llama-2-Chat ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt} [/INST] ``` To continue a conversation: ``` [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {prompt} [/INST] {model_reply} [INST] {prompt} [/INST] ``` ## Provided files Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description | | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- | | main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. | | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 
32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. | | gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | | gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | | gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. | | gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. | | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. | | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. | ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder_True` - With Git, you can clone a branch with: ``` git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ` ``` - In Python Transformers code, the branch is the `revision` parameter; see below. ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-13B-chat-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Llama-2-13B-chat-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done" 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-13B-chat-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! 
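If you would rather fetch a specific quantisation branch from Python ahead of time, a minimal sketch using `huggingface_hub` (the `local_dir` value is only an example):

```python
from huggingface_hub import snapshot_download

# Download just the chosen branch (revision) of the repo to a local folder.
snapshot_download(
    repo_id="TheBloke/Llama-2-13B-chat-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="Llama-2-13B-chat-GPTQ-4bit-32g",
)
```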
## How to use this GPTQ model from Python code First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed: `GITHUB_ACTIONS=true pip install auto-gptq` Then try the following example code: ```python from transformers import AutoTokenizer, pipeline, logging from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig model_name_or_path = "TheBloke/Llama-2-13B-chat-GPTQ" model_basename = "gptq_model-4bit-128g" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", use_triton=use_triton, quantize_config=None) """ To download from a specific branch, use the revision parameter, as in this example: model = AutoGPTQForCausalLM.from_quantized(model_name_or_path, revision="gptq-4bit-32g-actorder_True", model_basename=model_basename, use_safetensors=True, trust_remote_code=True, device="cuda:0", quantize_config=None) """ prompt = "Tell me about AI" system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information." prompt_template=f'''[INST] <<SYS>> {system_message} <</SYS>> {prompt} [/INST]''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline # Prevent printing spurious transformers error when using pipeline with AutoGPTQ logging.set_verbosity(logging.CRITICAL) print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Compatibility The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork. ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. 
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Meta's Llama 2 13B-chat # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. 
**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-1
kyleeasterly
2023-08-07T07:41:26Z
5
0
peft
[ "peft", "region:us" ]
null
2023-08-07T07:40:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
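## Example usage

A minimal sketch of loading this adapter for 4-bit inference with the `transformers` + `peft` stack. The base checkpoint (`openlm-research/open_llama_7b`) is an assumption inferred from the repository name and is not confirmed by the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the quantization config listed above (4-bit NF4, double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_7b",  # assumed base model, adjust if different
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b")

# Attach the LoRA adapter weights from this repository on top of the quantized base.
model = PeftModel.from_pretrained(base_model, "kyleeasterly/openllama-7b_purple-aerospace-v2-200-1")
```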
VinayHajare/distilhubert-finetuned-gtzan
VinayHajare
2023-08-07T07:40:35Z
176
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-29T16:11:35Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.89 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5167 - Accuracy: 0.89 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2163 | 1.0 | 113 | 2.0720 | 0.34 | | 1.7237 | 2.0 | 226 | 1.5361 | 0.59 | | 1.3254 | 3.0 | 339 | 1.2044 | 0.65 | | 1.0757 | 4.0 | 452 | 1.0578 | 0.66 | | 1.0683 | 5.0 | 565 | 0.8947 | 0.78 | | 0.9307 | 6.0 | 678 | 0.7716 | 0.82 | | 1.0313 | 7.0 | 791 | 0.7210 | 0.82 | | 0.6988 | 8.0 | 904 | 0.6506 | 0.8 | | 0.8053 | 9.0 | 1017 | 0.5944 | 0.81 | | 0.6243 | 10.0 | 1130 | 0.5637 | 0.87 | | 0.6238 | 11.0 | 1243 | 0.5212 | 0.89 | | 0.4493 | 12.0 | 1356 | 0.5167 | 0.89 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
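## Example usage

A small inference sketch with the `transformers` audio-classification pipeline; the audio file path is only a placeholder:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="VinayHajare/distilhubert-finetuned-gtzan",
)

# Predict the music genre of a local clip (replace with your own file).
predictions = classifier("example_song.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```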
leahsuperb/q-Taxi-v3
leahsuperb
2023-08-07T07:40:12Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T07:40:10Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.69
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="leahsuperb/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```
Aspik101/llama-30b-2048-instruct-PL-lora_adapter_model
Aspik101
2023-08-07T07:39:33Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-08-07T07:38:42Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
OpenBuddy/openbuddy-atom-13b-v9-bf16
OpenBuddy
2023-08-07T07:36:36Z
1,535
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-05T12:39:39Z
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers license: apache-2.0 --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice This model is built upon https://huggingface.co/AtomEchoAI/AtomGPT_56k , License: Apache 2.0. ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
tkathuria/finetuning-emotion-model-3000-samples
tkathuria
2023-08-07T07:34:57Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-16T07:11:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: finetuning-emotion-model-12000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
kyleeasterly/openllama-7b_purple-aerospace-v2-200-10
kyleeasterly
2023-08-07T07:31:52Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T07:31:18Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
anubhav10/xlm-roberta-base-finetuned-panx-de
anubhav10
2023-08-07T07:27:52Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-07T07:15:46Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8616659101225601 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1329 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2568 | 1.0 | 525 | 0.1583 | 0.8125 | | 0.1261 | 2.0 | 1050 | 0.1458 | 0.8473 | | 0.0823 | 3.0 | 1575 | 0.1329 | 0.8617 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
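## Example usage

A minimal inference sketch with the `transformers` token-classification pipeline; the German example sentence is arbitrary:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="anubhav10/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```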
vishnun/codenlbert-tiny
vishnun
2023-08-07T07:24:22Z
141
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "code", "nli", "en", "dataset:vishnun/CodevsNL", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-04T06:27:28Z
---
license: mit
datasets:
- vishnun/CodevsNL
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- code
- nli
---

## Preface
Code vs. natural-language classification using bert-small (from prajwall); the metrics achieved are shown below.

## Training Metrics

| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1     | 0.022500      | 0.012705        | 0.997203 |
| 2     | 0.008700      | 0.013107        | 0.996880 |
| 3     | 0.002700      | 0.014081        | 0.997633 |
| 4     | 0.001800      | 0.010666        | 0.997526 |
| 5     | 0.000900      | 0.010800        | 0.998063 |

## More
- Github repo for installable python package: https://github.com/Vishnunkumar
- Space on the extraction of code blocks from screenshots: https://huggingface.co/spaces/vishnun/SnapCode
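## Example usage

A quick inference sketch with the text-classification pipeline; the two inputs are arbitrary examples, and the exact label names depend on the model's config:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="vishnun/codenlbert-tiny")

print(clf("def add(a, b):\n    return a + b"))  # expected to be classified as code
print(clf("The weather is lovely today."))      # expected to be classified as natural language
```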
NhanHo185/falcon-7b-qlora-chat-support-bot-faq
NhanHo185
2023-08-07T07:18:43Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-07T02:45:55Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
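## Example usage

A minimal sketch of loading the adapter for inference. The base checkpoint (`tiiuae/falcon-7b`) is an assumption based on the repository name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reuse the 4-bit NF4 settings listed above for memory-efficient inference.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",       # assumed base model
    quantization_config=bnb_config,
    trust_remote_code=True,   # Falcon relied on custom modelling code at the time
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "NhanHo185/falcon-7b-qlora-chat-support-bot-faq")
```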
Yong-Sik/xlm-roberta-base-finetuned-panx-de
Yong-Sik
2023-08-07T07:15:41Z
123
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-08-07T07:06:35Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8616659101225601 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1329 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2568 | 1.0 | 525 | 0.1583 | 0.8125 | | 0.1261 | 2.0 | 1050 | 0.1458 | 0.8473 | | 0.0823 | 3.0 | 1575 | 0.1329 | 0.8617 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
srikanthsri/Linxx
srikanthsri
2023-08-07T07:13:58Z
1
0
peft
[ "peft", "region:us" ]
null
2023-08-07T07:13:50Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
yonghun/q-FrozenLake-v1-4x4-Slippery0.97
yonghun
2023-08-07T07:01:45Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T07:01:43Z
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery0.97
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4
      type: FrozenLake-v1-4x4
    metrics:
    - type: mean_reward
      value: 0.71 +/- 0.45
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="yonghun/q-FrozenLake-v1-4x4-Slippery0.97", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```
bitwild/q-FrozenLake-v1-4x4-noSlippery
bitwild
2023-08-07T06:56:24Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T06:56:22Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="bitwild/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```
ThaumielSparrow/nnue-unet
ThaumielSparrow
2023-08-07T06:51:37Z
0
0
null
[ "region:us" ]
null
2023-08-07T06:49:17Z
# Efficiently-Updatable Neural Network (NNUE) Refactor of Classic U-Net Architecture for Membrane Segmentation

### Developed by Luzhou Zhang

- Project still under development 🧠

## Setup

Clone the repository: `git clone https://github.com/ThaumielSparrow/cremi-nnue`

Install dependencies: `pip install -r requirements.txt`

Download the CREMI training and test data [here](https://cremi.org/data/).

Modify the runtime variables in `main.py` and `train.py`, then run the program: `python main.py`

Note: This project has only been tested and validated on Python 3.9.X and 3.10.X with frozen packages. Any Python version >3.7 is likely to work.

## Docs

I'm not writing documentation lol
Stokrotka/q-FrozenLake-v1-4x4-noSlippery
Stokrotka
2023-08-07T06:45:25Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-01-09T20:59:30Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="Stokrotka/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```
yonghun/q-FrozenLake-v1-4x4-noSlippery
yonghun
2023-08-07T06:42:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T06:42:46Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="yonghun/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```
jaswant50/distilbert-base-uncased-jaswant-base-finetuned
jaswant50
2023-08-07T06:41:04Z
0
0
transformers
[ "transformers", "text-classification", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-classification
2023-07-31T17:12:25Z
--- library_name: transformers pipeline_tag: text-classification --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shtif/poca-SoccerTwos
shtif
2023-08-07T06:29:26Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-08-07T06:26:43Z
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---

# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: shtif/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
KUN810/lora_of_Benares_from_Honkai_Ipmact_3rd
KUN810
2023-08-07T06:26:02Z
0
0
null
[ "region:us" ]
null
2023-08-07T05:28:09Z
A LoRA of Benares from Honkai Impact 3rd. Because source images were scarce, the model is somewhat overfitted.

The example images show, respectively, the effect of this LoRA on its own and combined with the detail-enhancement LoRA (add_detail.safetensors).

![](<https://huggingface.co/KUN810/lora_Honkai_Impact_3rd_Benares/blob/main/15974-832718923-dramatic%20angle%2C%20(honkai%20impact%203rd)%2C%20dutch%20angle%2C%20_(((masterpiece)))%2C%20((extremely%20detailed%20CG%20unity%204k%20wallpaper))%2C%20best%20quality.png>)
![](<https://huggingface.co/KUN810/lora_Honkai_Impact_3rd_Benares/blob/main/15979-1747533505-dramatic%20angle%2C%20(honkai%20impact%203rd)%2C%20dutch%20angle%2C%20%2C%20_(((masterpiece)))%2C%20((extremely%20detailed%20CG%20unity%204k%20wallpaper))%2C%20best%20quali.png>)
TheRains/yt-special-batch4-base
TheRains
2023-08-07T06:19:37Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:yt", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-08-07T05:07:54Z
--- license: apache-2.0 base_model: openai/whisper-base tags: - whisper-event - generated_from_trainer datasets: - yt metrics: - wer model-index: - name: Whisper Small Indonesian results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: yt id type: yt metrics: - name: Wer type: wer value: 66.04630049931912 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Indonesian This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the yt id dataset. It achieves the following results on the evaluation set: - Loss: 1.0175 - Wer: 66.0463 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4446 | 0.09 | 1000 | 1.2313 | 91.5959 | | 1.0599 | 0.17 | 2000 | 1.1312 | 106.3420 | | 1.1851 | 0.26 | 3000 | 1.0801 | 77.3166 | | 1.0325 | 0.34 | 4000 | 1.0380 | 71.8436 | | 1.008 | 0.43 | 5000 | 1.0175 | 66.0463 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
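## Example usage

A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline; the audio path is only a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="TheRains/yt-special-batch4-base",
)

# Transcribe an Indonesian audio clip (replace with your own file).
print(asr("contoh_audio.wav")["text"])
```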
wonwonn/distilbert-base-uncased-finetuned-emotion
wonwonn
2023-08-07T05:58:33Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-08-07T05:29:53Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.921 - name: F1 type: f1 value: 0.9207589885424755 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2281 - Accuracy: 0.921 - F1: 0.9208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8331 | 1.0 | 250 | 0.3266 | 0.904 | 0.9019 | | 0.2535 | 2.0 | 500 | 0.2281 | 0.921 | 0.9208 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.3 - Tokenizers 0.13.3
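## Example usage

A quick inference sketch with the text-classification pipeline; the example sentence is arbitrary:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wonwonn/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see the results of this experiment!"))
```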
Moonforeva/ppo-Huggy
Moonforeva
2023-08-07T05:53:25Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-08-07T05:53:15Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Moonforeva/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
Rihong/q-Taxi-v3
Rihong
2023-08-07T05:31:08Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-08-07T05:31:05Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.48 +/- 2.68
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
# (it downloads the pickled model dict from the Hub).
model = load_from_hub(repo_id="Rihong/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gym` (or gymnasium, depending on your setup)
```