modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
chickenrice0721/whisper-large-v3-translate-zh-v0.1-lt
chickenrice0721
2024-06-22T03:22:37Z
19
5
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "translate", "generated_from_trainer", "zh", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-22T01:30:40Z
--- license: apache-2.0 base_model: openai/whisper-large-v3 tags: - audio - automatic-speech-recognition - translate - generated_from_trainer language: - zh metrics: - cer - wer model-index: - name: whisper-large-v3-translate-zh-v0.1-lt results: [] pipeline_tag: automatic-speech-recognition --- # whisper-large-v3-translate-zh-v0.1-lt This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3). ## Model description Fine-tuned on 3,500 hours of data (Japanese audio with Chinese subtitles); in translate mode it outputs Chinese directly. ## Usage task='translate', language='ja' ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 4000 - dropout: 0.1 - mask_time_prob: 0.05 - mask_feature_prob: 0.2 - condition_on_previous_text_rate: 0.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:|:------:| | 2.1282 | 0.0739 | 1000 | 2.1852 | 1.9014 | 4.4904 | | 1.8567 | 0.1478 | 2000 | 1.8366 | 1.7295 | 3.8716 | | 1.6968 | 0.2217 | 3000 | 1.2615 | 1.6279 | 2.4825 | | 1.6264 | 0.2956 | 4000 | 1.0536 | 1.5625 | 1.8101 | | 1.5687 | 0.3695 | 5000 | 1.0932 | 1.5410 | 2.1218 | | 1.531 | 0.4433 | 6000 | 1.5156 | 1.2533 | 2.3689 | | 1.4875 | 0.5172 | 7000 | 1.4697 | 0.9560 | 1.5588 | | 1.4518 | 0.5911 | 8000 | 1.4521 | 1.0170 | 1.6392 | | 1.4472 | 0.6650 | 9000 | 1.4463 | 1.0084 | 1.6420 | | 1.3991 | 0.7389 | 10000 | 1.4238 | 0.9266 | 1.6992 | | 1.4266 | 0.8128 | 11000 | 1.4141 | 0.8365 | 1.3056 | | 1.3755 | 0.8867 | 12000 | 1.4033 | 0.7904 | 1.3119 | | 1.3833 | 0.9606 | 13000 | 1.4004 | 0.8600 | 1.3333 | | 1.3224 | 1.0345 | 14000 | 1.3770 | 0.8243 | 1.4560 | | 1.3295 | 1.1084 | 15000 | 1.3770 | 0.7852 | 1.4298 | | 1.3136 | 1.1823 | 16000 | 1.3564 | 0.7176 | 1.1826 | | 1.2832 | 1.2561 | 17000 | 1.3535 | 0.6767 | 1.1781 | | 1.2917 | 1.3300 | 18000 | 1.3584 | 0.7255 | 1.1218 | | 1.27 | 1.4039 | 19000 | 1.3330 | 0.6590 | 1.1242 | | 1.2704 | 1.4778 | 20000 | 1.3379 | 0.6934 | 1.1944 | | 1.2614 | 1.5517 | 21000 | 1.3330 | 0.6949 | 1.1820 | | 1.2455 | 1.6256 | 22000 | 1.3350 | 0.6931 | 1.0892 | | 1.2475 | 1.6995 | 23000 | 1.3154 | 0.6662 | 1.1576 | | 1.2583 | 1.7734 | 24000 | 1.3164 | 0.6490 | 1.0705 | | 1.2333 | 1.8473 | 25000 | 1.3184 | 0.6833 | 1.1480 | | 1.2462 | 1.9212 | 26000 | 1.3125 | 0.6672 | 1.1612 | | 1.2279 | 1.9950 | 27000 | 1.3047 | 0.6644 | 1.2179 | | 1.1908 | 2.0689 | 28000 | 1.3047 | 0.6938 | 1.2221 | | 1.1831 | 2.1428 | 29000 | 1.2998 | 0.6316 | 1.0717 | | 1.1705 | 2.2167 | 30000 | 1.3018 | 0.6165 | 1.0958 | | 1.171 | 2.2906 | 31000 | 1.3027 | 0.6109 | 1.0868 | | 1.1567 | 2.3645 | 32000 | 1.3037 | 0.6485 | 1.1736 | | 1.1705 | 2.4384 | 33000 | 1.2969 | 0.6078 | 1.0515 | | 1.1819 | 2.5123 | 34000 | 1.2949 | 0.6158 | 1.0362 | | 1.1447 | 2.5862 | 35000 | 1.2920 | 0.6365 | 1.0558 | | 1.17 | 2.6601 | 36000 | 1.2881 | 0.6339 | 1.0868 | | 1.1495 | 2.7340 | 37000 | 1.2949 | 0.6297 | 1.0437 | | 1.1395 | 2.8078 | 38000 | 1.2900 | 0.6285 | 1.1221 | | 1.15 | 2.8817 | 39000 | 1.2891 | 0.5997 | 1.0217 | | 1.1623 | 2.9556 | 40000 | 1.2881 | 0.6085 | 1.0395 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
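A minimal inference sketch with the 🤗 Transformers ASR pipeline, matching the card's `task='translate', language='ja'` note; the audio file name and chunk length are illustrative assumptions:

```python
# Hedged sketch: Japanese audio in, Chinese text out, per the card's usage note.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="chickenrice0721/whisper-large-v3-translate-zh-v0.1-lt",
)

result = asr(
    "sample_ja.wav",    # placeholder path to a Japanese audio file
    chunk_length_s=30,  # Whisper's native window; illustrative
    generate_kwargs={"task": "translate", "language": "ja"},
)
print(result["text"])
```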
janetsw/bas
janetsw
2024-06-22T03:21:00Z
92
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-16T04:27:52Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - janetsw/bas These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. You can find some example images below.
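As a hedged sketch, weights like these are typically loaded through diffusers' textual-inversion loader; the prompt token below (`<bas>`) is a guess, since the card does not state the learned placeholder token:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Loads the learned embedding from this repo; the actual prompt token is
# whatever placeholder was used at training time ("<bas>" is an assumption).
pipe.load_textual_inversion("janetsw/bas")

image = pipe("a photo of <bas>").images[0]
image.save("bas_example.png")
```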
Somnuk/distilbert-base-uncased-finetuned-imdb
Somnuk
2024-06-22T03:07:48Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-06-01T10:21:03Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6823 | 1.0 | 157 | 2.5090 | | 2.5864 | 2.0 | 314 | 2.4501 | | 2.528 | 3.0 | 471 | 2.4828 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.3.0 - Datasets 2.19.1 - Tokenizers 0.15.1
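A minimal fill-mask sketch for this checkpoint; the example sentence is illustrative, chosen to match the IMDb domain:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Somnuk/distilbert-base-uncased-finetuned-imdb")

# DistilBERT uses the [MASK] token; print the top predictions with scores.
for pred in fill("This movie is a great [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```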
Rickliou/gemma-medical_qa-Finetune
Rickliou
2024-06-22T02:55:53Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T02:53:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ymoslem/whisper-medium-ga2en-v1.3.1-4k-r
ymoslem
2024-06-22T02:45:53Z
28
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T23:50:45Z
--- language: - ga - en license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - ymoslem/IWSLT2023-GA-EN - ymoslem/FLEURS-GA-EN - ymoslem/BitesizeIrish-GA-EN - ymoslem/SpokenWords-GA-EN-MTed metrics: - bleu - wer model-index: - name: Whisper Medium GA-EN Speech Translation Raw results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords type: ymoslem/IWSLT2023-GA-EN metrics: - name: Bleu type: bleu value: 26.56 - name: Wer type: wer value: 76.67717244484467 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium GA-EN Speech Translation Raw This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords dataset. It achieves the following results on the evaluation set: - Loss: 1.5187 - Bleu: 26.56 - Chrf: 46.91 - Wer: 76.6772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Bleu | Chrf | Validation Loss | Wer | |:-------------:|:------:|:----:|:-----:|:-----:|:---------------:|:--------:| | 2.5874 | 0.0539 | 100 | 4.9 | 19.49 | 2.1785 | 114.0027 | | 2.3237 | 0.1079 | 200 | 6.48 | 22.77 | 2.1129 | 151.8235 | | 2.192 | 0.1618 | 300 | 7.92 | 25.9 | 2.0182 | 148.6718 | | 1.9861 | 0.2157 | 400 | 10.55 | 28.55 | 1.8607 | 121.0266 | | 1.8893 | 0.2697 | 500 | 16.68 | 33.64 | 1.8560 | 89.7794 | | 1.8526 | 0.3236 | 600 | 8.83 | 30.12 | 1.7738 | 166.9968 | | 1.6537 | 0.3776 | 700 | 10.94 | 33.83 | 1.6781 | 152.2287 | | 1.7103 | 0.4315 | 800 | 16.9 | 36.4 | 1.6389 | 92.2557 | | 1.4837 | 0.4854 | 900 | 13.81 | 34.5 | 1.6077 | 124.2233 | | 1.2784 | 0.5394 | 1000 | 14.79 | 37.53 | 1.6103 | 116.3440 | | 1.111 | 0.5933 | 1100 | 19.31 | 39.0 | 1.5579 | 93.6965 | | 1.167 | 0.6472 | 1200 | 20.88 | 41.7 | 1.5210 | 91.6704 | | 1.2217 | 0.7012 | 1300 | 21.29 | 41.72 | 1.4719 | 84.9167 | | 1.0613 | 0.7551 | 1400 | 28.3 | 44.37 | 1.4663 | 67.1319 | | 0.9256 | 0.8091 | 1500 | 27.5 | 45.59 | 1.4258 | 68.7078 | | 0.8023 | 0.8630 | 1600 | 27.1 | 46.27 | 1.4027 | 72.7600 | | 0.8327 | 0.9169 | 1700 | 27.03 | 46.19 | 1.3784 | 73.0302 | | 0.7019 | 0.9709 | 1800 | 28.91 | 46.34 | 1.4127 | 67.4921 | | 0.2681 | 1.0248 | 1900 | 28.53 | 47.12 | 1.3955 | 68.3026 | | 0.2659 | 1.0787 | 2000 | 28.37 | 45.85 | 1.4194 | 68.1225 | | 0.4202 | 1.1327 | 2100 | 27.53 | 44.0 | 1.5449 | 69.8784 | | 0.4212 | 1.1866 | 2200 | 25.89 | 43.05 | 1.6060 | 70.1036 | | 0.4124 | 1.2406 | 2300 | 24.31 | 41.55 | 1.6167 | 75.8217 | | 0.4696 | 1.2945 | 2400 | 21.79 | 41.86 | 1.5904 | 85.0968 | | 0.4018 | 1.3484 | 2500 | 25.36 | 43.45 | 1.6300 | 76.4070 | | 0.4549 | 1.4024 | 2600 | 26.06 | 44.27 | 1.5540 | 71.9946 | | 0.4018 | 1.4563 | 2700 | 26.22 | 45.42 | 1.5721 | 72.9851 | | 0.3534 | 1.5102 | 2800 | 23.65 | 44.43 | 1.5488 | 80.0090 | | 0.2907 | 1.5642 | 2900 | 24.04 | 42.57 | 1.5494 | 75.3715 | | 0.3117 | 1.6181 | 3000 | 28.27 | 45.06 | 1.5691 | 67.2670 | | 0.3379 | 1.6721 | 3100 | 30.52 | 47.42 | 1.4951 | 65.5561 | | 0.3686 | 1.7260 | 3200 | 30.7 | 48.13 | 1.5010 | 64.8357 | | 0.2855 | 1.7799 | 3300 | 27.19 | 46.47 | 1.5197 | 74.5610 | | 0.2919 | 1.8339 | 3400 | 31.39 | 48.56 | 1.4974 | 63.5299 | | 0.2582 | 1.8878 | 3500 | 30.18 | 48.54 | 1.4779 | 64.9257 | | 0.2523 | 1.9417 | 3600 | 30.29 | 47.07 | 1.4835 | 66.6367 | | 0.2005 | 1.9957 | 3700 | 29.89 | 47.95 | 1.4682 | 68.2125 | | 0.0617 | 2.0496 | 3800 | 29.49 | 47.1 | 1.5221 | 67.6272 | | 0.0661 | 2.1036 | 3900 | 26.93 | 46.91 | 1.5142 | 75.8217 | | 0.0609 | 2.1575 | 4000 | 26.56 | 46.91 | 1.5187 | 76.6772 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.2.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
hdve/Qwen-Qwen1.5-1.8B-1719023463
hdve
2024-06-22T02:31:06Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-22T02:31:03Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
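A hedged loading sketch: PEFT adapters like this one are usually attached to their base model as below (the prompt and generation settings are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card, then attach this adapter repo.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "hdve/Qwen-Qwen1.5-1.8B-1719023463")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```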
ZainabNac/Translation-Darija-to-English
ZainabNac
2024-06-22T02:26:51Z
1
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-06-18T17:54:14Z
--- license: apache-2.0 --- This model is pre-trained to perform neural machine translation between the Moroccan dialect (Darija) and English. It integrates the embeddings of a language model (BERT) into an encoder-decoder architecture to improve translation quality between these two languages. The training of this model is inspired by this article: In this training, we used different versions of BERT for English and DarijaBERT for the Moroccan dialect.
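Illustrative only: warm-starting a BERT-based encoder-decoder in 🤗 Transformers, in the spirit of the approach described above. The checkpoint names (SI2M-Lab/DarijaBERT for Darija, bert-base-uncased for English) are assumptions, not confirmed components of this repository:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Warm-start: a Darija BERT as encoder, an English BERT as decoder
# (transformers adds randomly initialized cross-attention layers).
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "SI2M-Lab/DarijaBERT", "bert-base-uncased"
)
tgt_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Required generation glue before fine-tuning on parallel Darija-English data.
model.config.decoder_start_token_id = tgt_tokenizer.cls_token_id
model.config.pad_token_id = tgt_tokenizer.pad_token_id
```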
magnifi/parser_user_v8-0621-epoch8-0.002_nosystemprompt
magnifi
2024-06-22T02:17:01Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T02:15:00Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
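A possible inference sketch with Unsloth (the `max_seq_length` value is illustrative; loading with plain transformers `AutoModelForCausalLM` should also work):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="magnifi/parser_user_v8-0621-epoch8-0.002_nosystemprompt",
    max_seq_length=4096,  # illustrative; set to your context needs
    load_in_4bit=True,    # the base checkpoint is a bnb 4-bit model
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference path

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```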
Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-ql-merge
Fischerboot
2024-06-22T02:13:52Z
102
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP", "base_model:merge:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP", "base_model:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA", "base_model:merge:Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T02:05:16Z
--- base_model: - Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP - Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP) + [Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA](https://huggingface.co/Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP+Fischerboot/LLama3-Lexi-Aura-3Some-SLERP-SLERP-QLORA merge_method: passthrough dtype: bfloat16 ```
Masterjp123/AI-MODEL-IDEAS-AND-TECHNICES
Masterjp123
2024-06-22T02:09:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-09T21:23:30Z
--- license: apache-2.0 --- # This page only exists to give both ideas and my OPINION about LLMs and other AI things ## 1: LLMs LLama3: What I have noticed is that LLama3-based LLMs are smart, dare I say smarter than I could have imagined, but a big problem with them, especially for writing or roleplay tasks, is that they do not have a style. For RP LLMs we need something like MythoMax. Now, how would this work? Well, someone who is way more skilled at LLM training and LoRA making should make models that replicate the Huginn, Chronos, and Airoboros models; we already have a Hermes model. Then we'll recreate MythoMax, since it brought a new era of RP LLMs. If it were recreated in LLama3, once people figure out how to fully crack LLama 3 to be fully uncensored, the models will be better, since any model that used MythoMax had better performance. NOTE: Stheno 3.2 fixed all of the problems; now it just needs to be finetuned more. ## 2: Text-to-Image (Diffusion Models) SD models: With the current state of SD3 not getting released, my faith in SAI is waning, so it's time to look for another company or person. UPDATE on Stability AI: Damn, they really just shot themselves in the foot, huh... <center><div style="width: 100%;"> <img src="https://huggingface.co/Masterjp123/AI-MODEL-IDEAS-AND-TECHNICES/blob/main/ezgif-2-751594434b.gif" style="display: block; margin: auto;"> </div></center> Damn it, Stability AI, you guys really did crap the bed after Emad left; after all, if SDXL licensing wasn't bad enough, SD3 licensing is more crap than what the internet is giving them right now. Anyway, the only way Stability AI can keep their reputation with the open-source community is if they fix their licensing and also maybe release the 800-million-parameter version of SD3. Diffusion models: For diffusion models, the best way to basically dig Stability AI's grave (which might be the only way to get good models with better licensing) is to release a model with the upsides of a DiT model at a size around SD1.5, which would make the ULTIMATE MODEL: patent NOT pending. ## License for my ideas and techniques that might be here one day: Use 'em however you'd like, just put my name somewhere and make the thing you make open source, since that might be nice... Also, probably don't take my advice, lol; after all, I can't even create an AI model, I only merge them :(
M2LabOrg/whisper-small-lv
M2LabOrg
2024-06-22T02:08:16Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "lv", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T07:58:20Z
--- language: - lv license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper small lv - Michel Mesquita results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: lv split: None args: 'config: lv, split: test' metrics: - name: Wer type: wer value: 35.363774156877604 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper small lv - Michel Mesquita This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5957 - Wer: 35.3638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.0043 | 12.9870 | 1000 | 0.4936 | 36.7753 | | 0.0005 | 25.9740 | 2000 | 0.5587 | 35.4585 | | 0.0003 | 38.9610 | 3000 | 0.5855 | 35.3448 | | 0.0002 | 51.9481 | 4000 | 0.5957 | 35.3638 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
Warairo/Yolo_464_anime
Warairo
2024-06-22T02:01:52Z
11
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:John6666/mala-anime-mix-nsfw-pony-xl-v3-sdxl", "base_model:adapter:John6666/mala-anime-mix-nsfw-pony-xl-v3-sdxl", "license:mit", "region:us" ]
text-to-image
2024-06-22T02:00:41Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: "Seele, bikini, beach, bigbreast, from behind" output: url: images/Yolo_464_animeeeè_e000008_01_20240621193702.png - text: "riotsukatsuki, cat ears, school uniform, on bed" output: url: images/Yolo_464_animeeeè_e000008_02_20240621193714.png base_model: John6666/mala-anime-mix-nsfw-pony-xl-v3-sdxl instance_prompt: Seele, riotsukatsuki license: mit --- # Yolo_464_animeeee <Gallery /> ## Model description none ## Trigger words You should use `Seele` to trigger the image generation. You should use `riotsukatsuki` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Warairo/Yolo_464_anime/tree/main) them in the Files & versions tab.
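A hedged sketch of applying this LoRA on top of the base model named above with diffusers (single-file weight autodetection and the prompt are assumptions):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "John6666/mala-anime-mix-nsfw-pony-xl-v3-sdxl", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA from this repo and prompt with one of the card's trigger words.
pipe.load_lora_weights("Warairo/Yolo_464_anime")

image = pipe("Seele, school uniform").images[0]
image.save("seele_example.png")
```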
Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF
Bakanayatsu
2024-06-22T01:35:48Z
8
0
null
[ "gguf", "mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "dpo", "rlhf", "laser", "llama-cpp", "gguf-my-repo", "en", "dataset:mlabonne/chatml_dpo_pairs", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B-laser", "base_model:quantized:mlabonne/NeuralHermes-2.5-Mistral-7B-laser", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-06-22T01:35:30Z
--- base_model: mlabonne/NeuralHermes-2.5-Mistral-7B-laser datasets: - mlabonne/chatml_dpo_pairs language: - en license: apache-2.0 tags: - mistral - instruct - finetune - chatml - gpt4 - synthetic data - distillation - dpo - rlhf - laser - llama-cpp - gguf-my-repo model-index: - name: NeuralHermes-2.5-Mistral-7B-laser results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.95 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 55.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B-laser name: Open LLM Leaderboard --- # Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF This model was converted to GGUF format from [`mlabonne/NeuralHermes-2.5-Mistral-7B-laser`](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Bakanayatsu/NeuralHermes-2.5-Mistral-7B-laser-Q4_K_S-GGUF --hf-file neuralhermes-2.5-mistral-7b-laser-q4_k_s-imat.gguf -c 2048 ```
davidyu2023/Qwen-Qwen1.5-7B-1719020015
davidyu2023
2024-06-22T01:33:43Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-7B", "base_model:adapter:Qwen/Qwen1.5-7B", "region:us" ]
null
2024-06-22T01:33:35Z
--- base_model: Qwen/Qwen1.5-7B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
EleutherAI/Meta-Llama-3-8B-population-random-standardized-many-random-names
EleutherAI
2024-06-22T01:11:08Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T21:16:25Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-PCA-epsilon1.5-pgd_layer8-def_layer8_9_10_11_12_13_14-wikitext-fullrank-71
PhillipGuo
2024-06-22T01:01:04Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T00:55:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer8-def_layer8_9_10_11_12_13_14-wikitext-fullrank-71
PhillipGuo
2024-06-22T00:58:32Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T00:56:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yuxuan-Qiao/PrismCaptioner-7B
Yuxuan-Qiao
2024-06-22T00:58:18Z
0
1
prismcaptioner
[ "prismcaptioner", "safetensors", "image-text-to-text", "dataset:FreedomIntelligence/ALLaVA-4V", "arxiv:2406.14544", "license:cc-by-4.0", "region:us" ]
image-text-to-text
2024-05-29T06:03:33Z
--- license: cc-by-4.0 datasets: - FreedomIntelligence/ALLaVA-4V pipeline_tag: image-text-to-text library_name: prismcaptioner --- <br> # PrismCaptioner Model Card **Model details** PrismCaptioners are open-source captioners with the LLaVA architecture, fine-tuned on the GPT4V-assisted dataset [ALLaVA](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V). We have released [PrismCaptioner-7B](https://huggingface.co/Yuxuan-Qiao/PrismCaptioner-7B) and [PrismCaptioner-2B](https://huggingface.co/Yuxuan-Qiao/PrismCaptioner-2B). PrismCaptioner-7B details - **Vision Backbone:** google/siglip-so400m-patch14-384 - **Language Backbone:** internlm/internlm2-7b - **Dataset:** 1x ALLaVA-Caption-[LAION/VFLAN] **Paper and codebase for more information:** [[Paper](https://arxiv.org/abs/2406.14544)] [[Code](https://github.com/SparksJoe/Prism)] **Intended uses** - **Perception Module:** The model can be integrated into [Prism](https://github.com/SparksJoe/Prism) as a perception module to solve vision-language tasks by utilizing an external LLM. - **Effective Captioner:** The model can produce high-quality captions for given images. **Model usage** Clone the [Prism](https://github.com/SparksJoe/Prism) repo and complete the [preparation](https://github.com/SparksJoe/Prism/tree/main?tab=readme-ov-file#preparation). You can use PrismCaptioners following the [usage](https://github.com/SparksJoe/Prism/blob/main/README.md#usage) instructions or the demo below. ```python # In the Prism repo folder from decouple import supported_VLM model = supported_VLM['prismcaptioner-7b']() res = model.generate(['assets/case1.png', 'Given the image below, please provide a detailed description of what you see.']) ```
PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer8-def_layer8_9_10_11_12_13_14-wikitext-fullrank-71
PhillipGuo
2024-06-22T00:57:00Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-22T00:54:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
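The "How to Get Started with the Model" section above is empty. A minimal sketch, assuming the checkpoint loads through the standard transformers causal-LM API (the repo id comes from this record's metadata; the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer8-def_layer8_9_10_11_12_13_14-wikitext-fullrank-71"

# Load the tokenizer and model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation for an arbitrary prompt.
inputs = tokenizer("The castle gates opened and", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```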
bobbyw/deberta-v3-large_v3_relationships
bobbyw
2024-06-22T00:51:36Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:bobbyw/deberta-v3-large_v3_relationships", "base_model:finetune:bobbyw/deberta-v3-large_v3_relationships", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-13T19:19:11Z
--- license: mit base_model: bobbyw/deberta-v3-large_v3_relationships tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: deberta-v3-large_v3_relationships results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large_v3_relationships This model is a fine-tuned version of [bobbyw/deberta-v3-large_v3_relationships](https://huggingface.co/bobbyw/deberta-v3-large_v3_relationships) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0124 - Accuracy: 0.0038 - F1: 0.0038 - Precision: 0.0019 - Recall: 0.4242 - Learning Rate: 0.002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Learning Rate | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:| | No log | 1.0 | 382 | 0.0109 | 0.0027 | 0.0046 | 0.0023 | 0.5152 | 0.002 | | 0.0043 | 2.0 | 764 | 0.0106 | 0.0026 | 0.0046 | 0.0023 | 0.5152 | 0.002 | | 0.0041 | 3.0 | 1146 | 0.0115 | 0.0028 | 0.0051 | 0.0026 | 0.5758 | 0.002 | | 0.004 | 4.0 | 1528 | 0.0132 | 0.0031 | 0.0048 | 0.0024 | 0.5455 | 0.002 | | 0.004 | 5.0 | 1910 | 0.0140 | 0.0032 | 0.0046 | 0.0023 | 0.5152 | 0.002 | | 0.0043 | 6.0 | 2292 | 0.0116 | 0.0032 | 0.0046 | 0.0023 | 0.5152 | 0.002 | | 0.0044 | 7.0 | 2674 | 0.0129 | 0.0027 | 0.0043 | 0.0022 | 0.4848 | 0.002 | | 0.0038 | 8.0 | 3056 | 0.0124 | 0.0038 | 0.0038 | 0.0019 | 0.4242 | 0.002 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
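The card includes no inference snippet. A minimal sketch, assuming the checkpoint exposes a standard sequence-classification head as its `text-classification` tag suggests; the example sentence is illustrative only:

```python
from transformers import pipeline

# Assumption: the checkpoint was saved with a standard sequence-classification
# head, so the generic text-classification pipeline can load it directly.
classifier = pipeline(
    "text-classification",
    model="bobbyw/deberta-v3-large_v3_relationships",
)
print(classifier("Entity A acquired Entity B in 2023."))
```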
mradermacher/Shark-1-Ogno-7b-passthrough-GGUF
mradermacher
2024-06-22T00:47:26Z
28
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "powermove72/Shark-1", "eren23/OGNO-7b-dpo-truthful", "en", "base_model:powermove72/Shark-1-Ogno-7b-passthrough", "base_model:quantized:powermove72/Shark-1-Ogno-7b-passthrough", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-21T23:59:40Z
--- base_model: powermove72/Shark-1-Ogno-7b-passthrough language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - powermove72/Shark-1 - eren23/OGNO-7b-dpo-truthful --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/powermove72/Shark-1-Ogno-7b-passthrough <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Shark-1-Ogno-7b-passthrough-GGUF/resolve/main/Shark-1-Ogno-7b-passthrough.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant 
types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
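For a concrete starting point with the files above, here is a minimal command, assuming a llama.cpp build recent enough to download directly from the Hub (the same `--hf-repo`/`--hf-file` flags used in other GGUF cards); the Q4_K_M file name matches the table above and the prompt is arbitrary:

```bash
# Download the recommended Q4_K_M quant from the Hub and run a prompt.
llama-cli --hf-repo mradermacher/Shark-1-Ogno-7b-passthrough-GGUF \
  --hf-file Shark-1-Ogno-7b-passthrough.Q4_K_M.gguf \
  -p "Write a haiku about sharks."
```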
mlx-community/instruction-pretrain-instruction-synthesizer
mlx-community
2024-06-22T00:39:50Z
7
4
mlx
[ "mlx", "safetensors", "mistral", "en", "dataset:instruction-pretrain/ft-instruction-synthesizer-collection", "arxiv:2406.14491", "license:apache-2.0", "region:us" ]
null
2024-06-21T16:32:58Z
--- datasets: - instruction-pretrain/ft-instruction-synthesizer-collection language: - en license: apache-2.0 tags: - mlx --- # mlx-community/instruction-pretrain-instruction-synthesizer The Model [mlx-community/instruction-pretrain-instruction-synthesizer](https://huggingface.co/mlx-community/instruction-pretrain-instruction-synthesizer) was converted to MLX format from [instruction-pretrain/instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) using mlx-lm version **0.14.3**. Original paper: [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491) ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/instruction-pretrain-instruction-synthesizer") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF
bendavis78
2024-06-22T00:22:55Z
11
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:LDJnr/Capybara", "base_model:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2", "base_model:quantized:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-22T00:22:53Z
--- base_model: LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2 datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - LDJnr/Capybara language: - en license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF This model was converted to GGUF format from [`LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2`](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q8_0-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q8_0.gguf -c 2048 ```
bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF
bendavis78
2024-06-22T00:20:56Z
11
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:LDJnr/Capybara", "base_model:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2", "base_model:quantized:LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-22T00:20:55Z
--- base_model: LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2 datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - LDJnr/Capybara language: - en license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF This model was converted to GGUF format from [`LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2`](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LoneStriker/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo bendavis78/dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-Q4_K_M-GGUF --hf-file dolphin-2.7-mixtral-8x7b-3.5bpw-h6-exl2-q4_k_m.gguf -c 2048 ```
ymoslem/whisper-medium-ga2en-v5.3.1-4k-r
ymoslem
2024-06-21T23:56:37Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "dataset:ymoslem/Tatoeba-Speech-Irish", "dataset:ymoslem/Wikimedia-Speech-Irish", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T18:30:40Z
--- language: - ga - en license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - ymoslem/IWSLT2023-GA-EN - ymoslem/FLEURS-GA-EN - ymoslem/BitesizeIrish-GA-EN - ymoslem/SpokenWords-GA-EN-MTed - ymoslem/Tatoeba-Speech-Irish - ymoslem/Wikimedia-Speech-Irish metrics: - bleu - wer model-index: - name: Whisper Medium GA-EN Speech Translation results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia type: ymoslem/IWSLT2023-GA-EN metrics: - name: Bleu type: bleu value: 29.54 - name: Wer type: wer value: 62.40432237730752 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium GA-EN Speech Translation This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, SpokenWords, Tatoeba, and Wikimedia dataset. It achieves the following results on the evaluation set: - Loss: 1.1929 - Bleu: 29.54 - Chrf: 51.58 - Wer: 62.4043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer | |:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:| | 2.4382 | 0.0109 | 100 | 2.1114 | 3.07 | 16.85 | 171.0491 | | 2.6151 | 0.0219 | 200 | 2.0207 | 6.25 | 23.02 | 126.9698 | | 2.5699 | 0.0328 | 300 | 1.8660 | 5.71 | 24.03 | 155.5606 | | 2.3084 | 0.0438 | 400 | 1.8084 | 9.87 | 28.45 | 129.0860 | | 2.3327 | 0.0547 | 500 | 1.7823 | 12.01 | 31.92 | 102.7915 | | 2.1495 | 0.0657 | 600 | 1.7238 | 13.97 | 32.4 | 98.6042 | | 2.2164 | 0.0766 | 700 | 1.6538 | 11.21 | 33.19 | 146.0153 | | 2.0071 | 0.0876 | 800 | 1.7038 | 14.34 | 35.72 | 96.9383 | | 1.8334 | 0.0985 | 900 | 1.6329 | 16.51 | 37.23 | 96.8032 | | 1.8359 | 0.1095 | 1000 | 1.6637 | 17.87 | 35.94 | 84.4665 | | 1.7703 | 0.1204 | 1100 | 1.5626 | 19.54 | 39.02 | 79.7839 | | 1.5805 | 0.1314 | 1200 | 1.5618 | 20.19 | 40.4 | 77.8028 | | 1.4545 | 0.1423 | 1300 | 1.5599 | 13.88 | 35.53 | 112.5619 | | 1.5177 | 0.1533 | 1400 | 1.4880 | 18.79 | 40.11 | 84.6916 | | 1.6335 | 0.1642 | 1500 | 1.4996 | 16.41 | 38.64 | 96.9833 | | 1.3809 | 0.1752 | 1600 | 1.4739 | 18.3 | 40.17 | 101.8910 | | 1.2694 | 0.1861 | 1700 | 1.4498 | 22.53 | 43.15 | 76.9923 | | 1.2321 | 0.1970 | 1800 | 1.4163 | 19.92 | 42.59 | 84.6015 | | 1.1969 | 0.2080 | 1900 | 1.4137 | 21.63 | 44.92 | 85.3670 | | 1.2023 | 0.2189 | 2000 | 1.3530 | 20.42 | 41.57 | 82.8906 | | 1.1676 | 0.2299 | 2100 | 1.3723 | 22.82 | 44.23 | 78.1180 | | 1.0332 | 0.2408 | 2200 | 1.3641 | 26.73 | 44.75 | 70.2386 | | 0.8589 | 0.2518 | 2300 | 1.3344 | 26.94 | 46.89 | 72.7600 | | 0.9829 | 0.2627 | 2400 | 1.3181 | 28.15 | 47.21 | 69.1130 | | 0.8228 | 0.2737 | 2500 | 1.3049 | 26.98 | 47.41 | 74.0207 | | 0.7667 | 0.2846 | 2600 | 1.2698 | 30.0 | 49.42 | 65.1058 | | 0.8749 | 0.2956 | 2700 | 1.2878 | 27.91 | 
47.67 | 66.9518 | | 0.7504 | 0.3065 | 2800 | 1.2670 | 32.03 | 50.35 | 63.6650 | | 0.7069 | 0.3175 | 2900 | 1.2771 | 30.7 | 49.53 | 64.4304 | | 0.7199 | 0.3284 | 3000 | 1.2658 | 30.21 | 48.93 | 65.5561 | | 0.6207 | 0.3394 | 3100 | 1.2687 | 30.82 | 49.11 | 66.0063 | | 0.5995 | 0.3503 | 3200 | 1.2207 | 31.99 | 50.94 | 62.9446 | | 0.6294 | 0.3612 | 3300 | 1.2422 | 31.05 | 50.85 | 64.7006 | | 0.4612 | 0.3722 | 3400 | 1.2203 | 33.1 | 51.82 | 61.9090 | | 0.5138 | 0.3831 | 3500 | 1.2007 | 32.08 | 51.86 | 63.0797 | | 0.5059 | 0.3941 | 3600 | 1.2130 | 31.8 | 51.19 | 63.9352 | | 0.417 | 0.4050 | 3700 | 1.1975 | 32.45 | 51.41 | 62.2692 | | 0.2958 | 0.4160 | 3800 | 1.2046 | 29.29 | 51.39 | 62.7645 | | 0.393 | 0.4269 | 3900 | 1.1968 | 28.95 | 51.45 | 63.1697 | | 0.3858 | 0.4379 | 4000 | 1.1929 | 29.54 | 51.58 | 62.4043 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.2.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
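The card stops at the training log, so here is a minimal inference sketch, assuming the standard transformers speech-recognition pipeline (this is a Whisper checkpoint fine-tuned to emit English text for Irish audio); the audio file name is a placeholder:

```python
from transformers import pipeline

# Minimal sketch: standard ASR pipeline; "clip.wav" is a placeholder for
# a real Irish-language audio file.
translator = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-medium-ga2en-v5.3.1-4k-r",
)
print(translator("clip.wav")["text"])  # expected: English translation
```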
Sobeih22/t5-finetuned-ar-to-arsl_test
Sobeih22
2024-06-21T23:48:06Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:PRAli22/arat5-arabic-dialects-translation", "base_model:finetune:PRAli22/arat5-arabic-dialects-translation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-21T20:46:06Z
--- license: apache-2.0 base_model: PRAli22/arat5-arabic-dialects-translation tags: - generated_from_trainer model-index: - name: t5-finetuned-ar-to-arsl_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-finetuned-ar-to-arsl_test This model is a fine-tuned version of [PRAli22/arat5-arabic-dialects-translation](https://huggingface.co/PRAli22/arat5-arabic-dialects-translation) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3309 - Bleu1: 0.9310 - Bleu2: 0.8974 - Bleu3: 0.7219 - Bleu4: 0.5884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu1 | Bleu2 | Bleu3 | Bleu4 | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:| | No log | 1.0 | 59 | 0.4660 | 0.8440 | 0.7803 | 0.5955 | 0.4626 | | No log | 2.0 | 118 | 0.3223 | 0.8966 | 0.8500 | 0.6689 | 0.5321 | | No log | 2.99 | 177 | 0.3004 | 0.9170 | 0.8791 | 0.7022 | 0.5644 | | No log | 3.99 | 236 | 0.2925 | 0.9205 | 0.8834 | 0.7071 | 0.5703 | | No log | 4.99 | 295 | 0.3099 | 0.9223 | 0.8859 | 0.7090 | 0.5716 | | No log | 5.99 | 354 | 0.2879 | 0.9244 | 0.8892 | 0.7125 | 0.5768 | | No log | 6.99 | 413 | 0.2971 | 0.9280 | 0.8936 | 0.7176 | 0.5824 | | No log | 8.0 | 473 | 0.2986 | 0.9254 | 0.8899 | 0.7136 | 0.5800 | | 0.3874 | 9.0 | 532 | 0.3128 | 0.9293 | 0.8952 | 0.7204 | 0.5874 | | 0.3874 | 10.0 | 591 | 0.3166 | 0.9316 | 0.8992 | 0.7242 | 0.5907 | | 0.3874 | 10.99 | 650 | 0.3270 | 0.9303 | 0.8964 | 0.7214 | 0.5861 | | 0.3874 | 11.99 | 709 | 0.3290 | 0.9304 | 0.8961 | 0.7223 | 0.5883 | | 0.3874 | 12.99 | 768 | 0.3326 | 0.9296 | 0.8957 | 0.7216 | 0.5880 | | 0.3874 | 13.99 | 827 | 0.3309 | 0.9294 | 0.8959 | 0.7208 | 0.5870 | | 0.3874 | 14.97 | 885 | 0.3309 | 0.9310 | 0.8974 | 0.7219 | 0.5884 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
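No usage example is given. A minimal sketch, assuming the standard text2text-generation pipeline implied by the repo's pipeline tag; the Arabic input sentence ("I went to school this morning") is illustrative:

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned T5 checkpoint with the generic
# text2text-generation pipeline and translate one Arabic sentence.
translator = pipeline(
    "text2text-generation",
    model="Sobeih22/t5-finetuned-ar-to-arsl_test",
)
print(translator("ذهبت إلى المدرسة صباح اليوم")[0]["generated_text"])
```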
suyash2739/English_to_Hinglish_fintuned_lamma_3_8b_instruct
suyash2739
2024-06-21T23:38:55Z
24
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "hinglish", "translation", "english to hinglish", "language translation", "text-generation-inference", "unsloth", "trl", "en to hi", "multilingual", "hindi codemix", "opensource", "en", "hi", "dataset:suyash2739/News_Hinglish_English", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
translation
2024-05-25T06:10:31Z
--- language: - en - hi license: apache-2.0 tags: - hinglish - translation - english to hinglish - language translation - text-generation-inference - transformers - unsloth - llama - trl - en to hi - multilingual - hindi codemix - opensource base_model: unsloth/llama-3-8b-Instruct-bnb-4bit datasets: - suyash2739/News_Hinglish_English --- [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://buymeacoffee.com/suyash008) # Dataset This is a dataset curated and made by me. You can buy it here: https://buymeacoffee.com/suyash008/e/268592 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/Qdr5bXsvsjPNF0DClmgus.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/MCp_zRz310ln004mnXKQh.png) # My Links LinkedIn: https://www.linkedin.com/in/suyash-ag/ GitHub: https://github.com/Suyash018 # Project - An English to Hinglish Language Translator. This project aims to develop a high-performance language translation model capable of translating standard English to Hinglish (a blend of Hindi and English commonly used in informal communication in India). # Loss Curve ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/31vSqxldRSGEDNGwrJbFy.png) # Inference / How to use the model: ``` !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git" !pip install --no-deps xformers trl peft accelerate bitsandbytes ``` ```python from unsloth import FastLanguageModel import torch max_seq_length = 2048 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False. model, tokenizer = FastLanguageModel.from_pretrained( model_name = "suyash2739/English_to_Hinglish_fintuned_lamma_3_8b_instruct", max_seq_length = max_seq_length, dtype = dtype, load_in_4bit = load_in_4bit, ) ``` ```python def pipe(text): prompt = """Translate the input from English to Hinglish to give the response. ### Input: {} ### Response: """ inputs = tokenizer( [ prompt.format(text), ], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 2048, use_cache = True) raw_text = tokenizer.batch_decode(outputs)[0] return raw_text.split("### Response:\n")[1].split("<|eot_id|>")[0] ``` ```python text = "This is a fine-tuned Hinglish translation model using Llama 3." # INPUT print(pipe(text)) ## Yeh ek fine-tuned Hinglish translation model hai jo Llama 3 ka istemal karta hai. ``` # Comparison - English ```python English = """Finance Minister Nirmala Sitharaman said, "There used to be a poverty index...a human development index and all of them continue, but today what is keenly watched is VIX, the volatility index of the markets." Stability of the government is important for markets to be efficient, she stated. PM Narendra Modi's third term will make markets function with stability, she added.""" ``` - Gpt 4o ```python Gpt 4o = """ Finance Minister Nirmala Sitharaman ne kaha, "Pehle ek poverty index hota tha...ek human development index hota tha aur yeh sab ab bhi hain, lekin aaj jo sabse zyada dekha ja raha hai, woh hai VIX, jo markets ka volatility index hai." Unhone kaha ki sarkar ki stability markets ke efficient hone ke liye zaroori hai.
PM Narendra Modi ka teesra term markets ko stability ke saath function karne mein madad karega, unhone joda.""" ``` - My model (Finetuned LLama model) ```python LLama model = Finance Minister Nirmala Sitharaman ne kaha, "Pehle ek poverty index hota tha... ek human development index hota tha aur sab kuch ab bhi chal raha hai, lekin aaj jo kaafi zyada dekha ja raha hai, woh VIX hai, jo markets ki volatility ka index hai." Unhone kaha ki markets ke liye sarkar ki stability zaroori hai. PM Narendra Modi ke teesre term se markets stability ke saath function karenge, unhone joda. ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65187b234965add2b08b2990/Rc3nlfnSVwu1dnzfxYb-Y.png) # Uploaded model - **Developed by:** suyash2739 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
tsavage68/Summary_L3_1000steps_1e5rate_01beta_CSFTDPO
tsavage68
2024-06-21T23:21:55Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2", "base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T21:22:16Z
--- license: llama3 base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2 tags: - trl - dpo - generated_from_trainer model-index: - name: Summary_L3_1000steps_1e5rate_01beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Summary_L3_1000steps_1e5rate_01beta_CSFTDPO This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5961 - Rewards/chosen: -0.8715 - Rewards/rejected: -3.9531 - Rewards/accuracies: 0.1400 - Rewards/margins: 3.0816 - Logps/rejected: -54.7948 - Logps/chosen: -18.0977 - Logits/rejected: -1.3576 - Logits/chosen: -1.3527 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.5546 | 0.2004 | 50 | 0.5961 | -0.8720 | -3.9451 | 0.1400 | 3.0730 | -54.7146 | -18.1031 | -1.3571 | -1.3522 | | 0.6585 | 0.4008 | 100 | 0.5961 | -0.8712 | -3.9495 | 0.1400 | 3.0783 | -54.7588 | -18.0949 | -1.3575 | -1.3526 | | 0.6238 | 0.6012 | 150 | 0.5961 | -0.8681 | -3.9389 | 0.1400 | 3.0707 | -54.6525 | -18.0641 | -1.3563 | -1.3514 | | 0.6065 | 0.8016 | 200 | 0.5961 | -0.8725 | -3.9499 | 0.1400 | 3.0774 | -54.7626 | -18.1074 | -1.3568 | -1.3519 | | 0.6238 | 1.0020 | 250 | 0.5961 | -0.8717 | -3.9513 | 0.1400 | 3.0796 | -54.7771 | -18.1000 | -1.3576 | -1.3527 | | 0.6238 | 1.2024 | 300 | 0.5961 | -0.8725 | -3.9481 | 0.1400 | 3.0756 | -54.7450 | -18.1078 | -1.3571 | -1.3522 | | 0.6238 | 1.4028 | 350 | 0.5961 | -0.8727 | -3.9498 | 0.1400 | 3.0771 | -54.7614 | -18.1094 | -1.3572 | -1.3523 | | 0.5718 | 1.6032 | 400 | 0.5961 | -0.8724 | -3.9505 | 0.1400 | 3.0781 | -54.7691 | -18.1072 | -1.3573 | -1.3524 | | 0.5892 | 1.8036 | 450 | 0.5961 | -0.8726 | -3.9502 | 0.1400 | 3.0776 | -54.7655 | -18.1083 | -1.3573 | -1.3523 | | 0.5718 | 2.0040 | 500 | 0.5961 | -0.8717 | -3.9446 | 0.1400 | 3.0728 | -54.7095 | -18.1001 | -1.3575 | -1.3526 | | 0.5718 | 2.2044 | 550 | 0.5961 | -0.8733 | -3.9538 | 0.1400 | 3.0805 | -54.8019 | -18.1157 | -1.3569 | -1.3521 | | 0.5545 | 2.4048 | 600 | 0.5961 | -0.8691 | -3.9509 | 0.1400 | 3.0818 | -54.7729 | -18.0740 | -1.3573 | -1.3524 | | 0.5199 | 2.6052 | 650 | 0.5961 | -0.8731 | -3.9531 | 0.1400 | 3.0800 | -54.7946 | -18.1135 | -1.3573 | -1.3524 | | 0.6238 | 2.8056 | 700 | 0.5961 | -0.8719 | -3.9544 | 0.1400 | 3.0826 | -54.8080 | -18.1013 | -1.3581 | -1.3532 | | 0.6065 | 3.0060 | 750 | 0.5961 | -0.8719 | -3.9517 | 0.1400 | 3.0798 | -54.7812 | -18.1017 | 
-1.3575 | -1.3526 | | 0.6412 | 3.2064 | 800 | 0.5961 | -0.8706 | -3.9530 | 0.1400 | 3.0824 | -54.7941 | -18.0886 | -1.3574 | -1.3525 | | 0.6585 | 3.4068 | 850 | 0.5961 | -0.8715 | -3.9512 | 0.1400 | 3.0798 | -54.7760 | -18.0975 | -1.3577 | -1.3529 | | 0.6238 | 3.6072 | 900 | 0.5961 | -0.8715 | -3.9512 | 0.1400 | 3.0798 | -54.7760 | -18.0975 | -1.3577 | -1.3529 | | 0.5372 | 3.8076 | 950 | 0.5961 | -0.8715 | -3.9531 | 0.1400 | 3.0816 | -54.7948 | -18.0977 | -1.3576 | -1.3527 | | 0.6238 | 4.0080 | 1000 | 0.5961 | -0.8715 | -3.9531 | 0.1400 | 3.0816 | -54.7948 | -18.0977 | -1.3576 | -1.3527 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.0.0+cu117 - Datasets 2.20.0 - Tokenizers 0.19.1
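The card documents training only. A minimal generation sketch, assuming the tokenizer ships a Llama 3 chat template (the base model is a Llama 3 SFT checkpoint) and that a plain summarization instruction is an acceptable prompt; both are assumptions, since the expected prompt format is not documented:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tsavage68/Summary_L3_1000steps_1e5rate_01beta_CSFTDPO"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Assumption: the tokenizer carries the Llama 3 chat template and the model
# accepts a plain summarization instruction; neither is documented above.
messages = [{"role": "user", "content": "Summarize this note: Patient reports mild headache, no fever."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```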
SiMajid/xlm-roberta-base
SiMajid
2024-06-21T23:05:26Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "trl", "reward-trainer", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T23:04:52Z
--- license: mit base_model: FacebookAI/xlm-roberta-base tags: - trl - reward-trainer - generated_from_trainer model-index: - name: xlm-roberta-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25.0 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
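The card gives no inference code. Because the tags indicate TRL's reward trainer, this sketch assumes the checkpoint was saved as a single-logit sequence-classification model, which is how RewardTrainer typically exports; the candidate texts are illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "SiMajid/xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score two candidate responses; a higher logit means "preferred", assuming
# the checkpoint was saved by TRL's RewardTrainer with a single-logit head.
candidates = ["A helpful, accurate answer.", "An evasive non-answer."]
inputs = tokenizer(candidates, padding=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)
print(scores.tolist())
```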
EleutherAI/Meta-Llama-3-8B-capitals-random-standardized-many-random-names
EleutherAI
2024-06-21T22:54:17Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T21:15:07Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
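As with the other auto-generated cards above, the getting-started section is empty. A minimal sketch, assuming standard causal-LM loading; nothing about this fine-tune's prompt format is documented, so the prompt is a guess based on the repo name:

```python
from transformers import pipeline

# Minimal sketch: standard causal-LM loading; the prompt is a guess based
# on the "capitals" repo name, not a documented format.
generator = pipeline(
    "text-generation",
    model="EleutherAI/Meta-Llama-3-8B-capitals-random-standardized-many-random-names",
)
print(generator("The capital of France is", max_new_tokens=10)[0]["generated_text"])
```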
Areeb-02/mpnet-base-GISTEmbedLoss-MSEE_Evaluator-salestax-docs
Areeb-02
2024-06-21T22:47:54Z
10
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:238", "loss:GISTEmbedLoss", "arxiv:1908.10084", "arxiv:2402.16829", "base_model:microsoft/mpnet-base", "base_model:finetune:microsoft/mpnet-base", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-21T22:47:22Z
--- base_model: microsoft/mpnet-base datasets: [] language: [] library_name: sentence-transformers metrics: - negative_mse pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:238 - loss:GISTEmbedLoss widget: - source_sentence: What is the timeframe for opening a physical location in the Designated Areas to be eligible for the credit under Section 960.1? sentences: - '(a) A person or combined group that opens a physical location in the Designated Areas on or after January 1, 2023 through and including December 31, 2027, shall be allowed a credit against that person or combined group''s Gross Receipts Tax if the person or 74SAN FIAT CU 2024 11AY 10 PM 1331 combined group did not have a physical location in the City for at least three years prior to DEPARTMENT OF ELECTIONS opening the physical location. The credit under this Section 960.1 shall be an annual credit for each of up to three tax years immediately following the tax year in which the person or combined group opened the physical location in the Designated Areas, provided the person or combined group maintains a physical location in the Designated Areas in the tax year that the credit is taken. To be eligible for the credit, the person or combined group must take the credit for each tax year on an original Gross Receipts Tax return filed with the Tax Collector.' - '5222 (Nondepository Credit Intermediation), 5223 (Activities Related to Credit Intermediation)), 533 (Lessors of Nonfinancial Intangible Assets (except Copyrighted Works)), 54 (Professional, Scientific, and Technical Services) but not including 541714 (Research and Development in Biotechnology (except Nanobiotechnology)), 55 (Management of Companies and 63SA CL 2024 LAY 10 PA (30 Enterprises), 562 (Waste Management and Remediation Services), 61 (Educational Services), DEPARTMENT OF ELECTI 62 (Health Care and Social Assistance), and all business activities not otherwise exempt and not elsewhere subjected to a gross receipts tax rate by Sections 953.20 through 953.26 or an administrative office tax under Section 953.8. (c) The amount of taxable gross receipts from Category 5 Business Activities subject to the gross receipts tax shall be three-quarters of the amount determined under Section 956.1 plus one-quarter of the amount determined under Section 956.2. SEC. 953.25. GROSS RECEIPTS TAX APPLICABLE TO CATEGORY 6 BUSINESS ACTIVITIES. (a) The gross receipts tax rates applicable to Category 6 Business Activities are: (1) For tax years 2025 and 2026: 1.5% for taxable gross receipts between $0 and $1,000,000 1.5% for taxable gross receipts between $1,000,000.01 and $2,500,000 3% for taxable gross receipts between $2,500,000.01 and $25,000,000 2.352% for taxable gross receipts between $25,000,000.01 and $50,000,000 3.024% for taxable gross receipts between $50,000,000.01 and $75,000,000 3.024% for taxable gross receipts between $75,000,000.01 and $100,000,000 3.36% for taxable gross receipts between $100,000,000.01 and $150,000,000 3.36% for taxable gross receipts between $150,000,000.01 and $250,000,000 3.36% for taxable gross receipts between $250,000,000.01 and $500,000,000 3.' 
- 'Except as otherwise provided in this Section 855 and Section 856 of this Article 12, and Section 76.3 of Article 2 of this Business and Tax Regulations Code, the annual fee for obtaining a registration certificate for the 2025-2026 registration year, payable in advance, shall be as follows: San Francisco Gross Receipts for the Immediately Preceding Tax Year $0 to $100,000 $100,000.01 to $250,000 Annual Registration Fee $41 $71 $250,000.01 to $500,000 $120 $500,000.01 to $750,000 $240 $750,000.01 to $1,000,000 $330 $1,000,000.01 to $1,500,000 $469 $1,500,000.01 to $2,000,000 $656 $2,000,000.01 to $2,500,000 $848 $2,500,000.01 to $5,000,000 $1,414 $5,000,000.01 to $7,500,000 $600 $7,500,000.01 to $15,000,000 $1,500 $15,000,000.01 to $25,000,000 $4,875 25$25,000,000.01 to $50,000,000 $50,000,000.01 to $100,000,000 $100,000,000.01 to $200,000,000 $200,000,000.01 and over SAN KAL CO 2724 11 10 11 129 $15,000 DEPARTMENT $30,000 $37,500 $45,000 (d) Fee for Registration Years Beginning On or After April 1, 2026. Except as otherwise provided in this Section 855 and Section 856 of this Article 12, and Section 76.3 of Article 2 of this Business and Tax Regulations Code, the annual fee for obtaining a registration certificate for registration years beginning on or after April 1, 2026, payable in advance, shall be as follows: San Francisco Gross Receipts for the Immediately Preceding Tax Year $0 to $100,' - source_sentence: How is the overpaid executive administrative office tax measured for individuals or combined groups, and what other administrative office taxes might they also be liable for? sentences: - '(b) Combined gross receipts are the total worldwide gross receipts of the person and all related entities to the person, unless the election provided for in California Revenue and Taxation Code Section 25110 is in effect for the person, in which case combined gross receipts shall be computed consistently with the water''s edge election, as set forth therein. (c) Combined payroll is the total worldwide compensation paid by the person and all related entities to the person, unless the election provided for in California Revenue and Taxation Code Section 25110 is in effect for the person, in which case combined payroll shall be computed consistently with the water''s edge election, as set forth therein. A person who 72SALA 50 has no combined payroll in a tax year shall have no gross receipts under this Section for that DEPARTMENT OF LECT tax year. (d) Payroll in the City is the total amount paid for compensation in the City by the person and by all related entities to the person. (e) Compensation paid in the City shall be determined as follows:set forth in Section 953.8)(4). (1) Where compensation is paid by reason of work performed or services rendered by an individual wholly within the City, all of the compensation for such individual shall be attributable to the City. (2) Where compensation is paid by reason of work performed or services rendered by an individual partly within and partly without the City, the portion of such compensation attributable to the City shall be determined as follows: (A) Except as otherwise provided in this Section 956.2(e), the portion of such compensation attributable to the City shall be the portion of such compensation which the total number of working hours employed within the City bears to the total number of working hours within and without the City. 
(B) If the amount of such compensation depends on the volume of business transacted by such individual, then the portion of such compensation attributable to the City shall be the portion of such compensation which the volume of business transacted by such individual in the City bears to the volume of business transacted by such individual within and without the City.' - 'If the person or combined group did not file a 7SANT V 772 MAY 10 TH 1:28 return for the preceding tax year and did not request an extension under Section 6.9-4 to file that return, the person or combined group shall be deemed to have filed a return showing no liability for purposes of this Section 6.9-Isubsection (c)(3)(C)(ii), and no estimated tax payments of that tax shall be due for the current tax year. (D) Lessor of Residential Real Estate Exemption. Notwithstanding any other provision in this Section 6.9-1(c)(3), a lessor of residential real estate, as defined in Section 954.1 of Article 12-A-1, shall not be required to make estimated tax payments under this Section 6.9-1(c)(3), but shall pay its full Gross Receipts Tax liability, Homelessness Gross Receipts Tax liability, and Overpaid Executive Gross Receipts Tax liability on or before the last day of February of the succeeding year, or the extended deadline in Section 6.9-4, if the lessor''s gross receipts within the City shown on the lessor''s return for either the current tax year or the preceding tax year did not exceed the threshold in Section 954.1(b) of Article 12-A-1. * * (g) Business Registration Certificate; Annual Due Date. (1) For registration years ending on or before June 30, 2025 and the 2025-2026 registration year, eExcept for any extensions granted under Section 6.9-4 or as otherwise provided in this Business and Tax Regulations Code, returns and payments of the Business Registration Fee (Article 12) shall be filed annually and are due and payable, and shall be delinquent if not submitted and paid to the Tax Collector, on or before the last day of May preceding the registration year commencing July 1 of that year.' - 'DEPARTMENT OF ELECTION Article 12-A-1, shall pay an annual overpaid executive administrative office tax if the Executive Pay Ratio for the tax year of that person or the combined group of which it is a part exceeds 100:1. This overpaid executive administrative office tax shall be measured by the person''s total payroll expense, as defined in Section 953.8(f) of Article 12-A-1, that is attributable to the City. If a person is a member of a combined group, then its tax shall be measured by the total payroll expense of the combined group attributable to the City. Such person or combined group shall pay only the overpaid executive administrative office tax, and not the tax imposed under other subsections of this Section 3303, but a person or combined group may be liable for the administrative office tax imposed by Section 953.8 of Article 12-A-1 and the homelessness administrative office tax imposed by Section 2804(d) or Section 2804.9 of Article 28 in addition to the overpaid executive administrative office tax imposed by this subsection (dg). Unless specified otherwise, this overpaid executive administrative office tax shall be considered part of the Overpaid Executive Gross Receipts Tax for all purposes.' - source_sentence: What are the homelessness gross receipts tax rates applicable to Category 1 Business Activities for tax years beginning on or after January 1, 2025, as outlined in Section 2804.3 of the same code? 
sentences: - 'For purposes of serving a notice of a deficiency determination under this Section 6.11-2, the date that a return was due shall be: (1) For tax years and tax periods ending on or before December 31, 2021, and registration years ending on or before June 30, 2023, the due date of the return without regard to any extensions under Section 6.9-4; (2) For tax years and tax periods beginning on or after January 1, 2022, and registration years beginning on or after July 1, 2023 and ending on or before March 31, 2026, except for the taxes for the years and periods subject to subsection (a)(3) of this Section 6.11-2, the due date of the return including any extensions; or (3) For the Gross Receipts Tax (Article 12-A-1) (including the tax on administrative office business activities in Section 953.8 of Article 12-A-1), the Early Care and Education Commercial Rents Tax (Article 21), the Homelessness Gross Receipts Tax (Article 28) (including the tax on administrative office business activities in Section 2804(d) or Section 2804.9 of Article 28), the Cannabis Business Tax (Article 30), and the Overpaid Executive Gross Receipts Tax (Article 33) (including the tax on administrative office business activities in Section 3303(g)) for tax years beginning on or after January 1, 2025, and for the Business Registration Fee (Article 12) for registration years beginning on or after April 1, 2026, November 30 of the calendar year in which the return for such respective tax was originally due. 13SAN CO 22 MAY (b) The notice of any determination under this Section 6.11-2 may be served upon the DEMANTHER OF CLIENT OFF taxpayer or other affected person personally or by mail; if by mail, service shall be: (1) to the last known address that appears in the Tax Collector''s records, provided there is such an address in the Tax Collector''s records, or (2) to an address that the Tax Collector concludes, in the Tax Collector''s discretion, is the last known address of the person(s).' - 'If the person or combined group did not file a 7SANT V 772 MAY 10 TH 1:28 return for the preceding tax year and did not request an extension under Section 6.9-4 to file that return, the person or combined group shall be deemed to have filed a return showing no liability for purposes of this Section 6.9-Isubsection (c)(3)(C)(ii), and no estimated tax payments of that tax shall be due for the current tax year. (D) Lessor of Residential Real Estate Exemption. Notwithstanding any other provision in this Section 6.9-1(c)(3), a lessor of residential real estate, as defined in Section 954.1 of Article 12-A-1, shall not be required to make estimated tax payments under this Section 6.9-1(c)(3), but shall pay its full Gross Receipts Tax liability, Homelessness Gross Receipts Tax liability, and Overpaid Executive Gross Receipts Tax liability on or before the last day of February of the succeeding year, or the extended deadline in Section 6.9-4, if the lessor''s gross receipts within the City shown on the lessor''s return for either the current tax year or the preceding tax year did not exceed the threshold in Section 954.1(b) of Article 12-A-1. * * (g) Business Registration Certificate; Annual Due Date. 
(1) For registration years ending on or before June 30, 2025 and the 2025-2026 registration year, eExcept for any extensions granted under Section 6.9-4 or as otherwise provided in this Business and Tax Regulations Code, returns and payments of the Business Registration Fee (Article 12) shall be filed annually and are due and payable, and shall be delinquent if not submitted and paid to the Tax Collector, on or before the last day of May preceding the registration year commencing July 1 of that year.' - '(a) The homelessness gross receipts tax rates applicable to Category 1 Business Activities for tax years beginning on or after January 1, 2025 are: 0% for taxable gross receipts between $0 and $1,000,000 0% for taxable gross receipts between $1,000,000.01 and $2,500,000 0% for taxable gross receipts between $2,500,000.01 and $25,000,000 0.164% for taxable gross receipts between $25,000,000.01 and $50,000,000 0.164% for taxable gross receipts between $50,000,000.01 and $75,000,000 0.164% for taxable gross receipts between $75,000,000.01 and $100,000,000 0.164% for taxable gross receipts between $100,000,000.01 and $150,000,000 0.246% for taxable gross receipts between $150,000,000.01 and $250,000,000 0.328% for taxable gross receipts between $250,000,000.01 and $500,000,000 0.41% for taxable gross receipts between $500,000,000.01 and $1,000,000,000 0.492% for taxable gross receipts over $1,000,000,000 (b) "Category 1 Business Activities" has the same meaning as in Section 953.20(b) of Article 12-A-1 of this Business and Tax Regulations Code. (c) The amount of taxable gross receipts from Category 1 Business Activities subject to the homelessness gross receipts tax shall be three-quarters of the amount determined under Section 956.1 of Article 12-A-1 plus one-quarter of the amount determined under Section 956.2 of Article 12-A-1. 84SAL SEC. 2804.3. HOMELESSNESS GROSS RECEIPTS TAX APPLICABLE TO CATEGORY 2 BUSINESS ACTIVITIES.' - source_sentence: What are the different gross receipts tax rates applicable to private education and health services, administrative and support services, and miscellaneous business activities as outlined in Section 953.4? sentences: - '(b) For only so long as and to the extent that the City is prohibited from imposing the Homelessness Gross Receipts Tax, any person upon whom the City is prohibited under the Constitution or laws of the State of California or the Constitution or laws of the United States from imposing the Homelessness Gross Receipts Tax shall be exempt from the Homelessness Gross Receipts Tax. (c) For purposes of this Article 28, gross receipts shall not include receipts that are excluded from gross receipts for purposes of the gross receipts tax imposed by Article 12-A-1, and also shall not include receipts subject to a gross receipts tax on commercial rents imposed as a result of a measure adopted by the voters of San Francisco in the June 5, 2018 election. (d) This Section 2805 shall apply to tax years ending on or before December 31, 2024. SEC. 2805.3. EXEMPTIONS AND EXCLUSIONS. 
(a) An organization that is exempt from income taxation by Chapter 4 (commencing with Section 23701) of Part 11 of Division 2 of the California Revenue and Taxation Code or Subchapter F (commencing with Section 501) of Chapter 1 of Subtitle A of the Internal Revenue Code of 1986, as amended, as qualified by Sections 502, 503, 504, and 508 of the Internal Revenue Code of 1986, as amended, shall be exempt from taxation under this Article 28, only so long as those exemptions continue to exist under state or federal law. (b) For purposes of this Article 28, gross receipts as defined in Section 952.3 shall not include receipts from business activities if, and only so long as and to the extent that, the City is prohibited 93SA 01:32 from taxing such receipts under the Constitution or laws of the United States or under the Constitution or laws of the State of California. (c) Rent Controlled Buildings Exclusion. A person subject to the homelessness gross receipts tax may exclude from gross receipts in any tax year 50% of the total amount received from the rental of real property to tenants in occupancy at any location in the City, which is subject to limits on rent increases pursuant to the Residential Rent Stabilization and Arbitration Ordinance, Administrative Code, Chapter 37, Section 37.1 et seq.' - '10 that the 90% gross receipts threshold has been met for tax year 2023, and for tax year 2024 if the Controller does not certify under Section 953.10 that the 95% gross receipts threshold has been met for tax year 2024: 42SAN LANCI Co FILED 2024 MAY 10 PH 1:29 0.45% (e.g. $4.50 per $1,000) for taxable gross receipts between $0 and $1,000,000 DEPARTMENT OF ELECTIONS 0.488% (e.g. $4.88 per $1,000) for taxable gross receipts between $1,000,000.01 and $2,500,000 0.488% (e.g. $4.88 per $1,000) for taxable gross receipts between $2,500,000.01 and $25,000,000 0.6% (e.g. $6 per $1,000) for taxable gross receipts over $25,000,000 (D) For tax year 2024 if the Controller certifies under Section 953.10 that the 95% gross receipts threshold has been met for tax year 2024, and for tax years beginning on or after January 1, 2025: 0.465% (e.g. $4.65 per $1,000) for taxable gross receipts between $0 and $1,000,000 0.504% (e.g. $5.04 per $1,000) for taxable gross receipts between $1,000,000.01 and $2,500,000 0.504% (e.g. $5.04 per $1,000) for taxable gross receipts between $2,500,000.01 and $25,000,000 0.62% (e.g. $6.20 per $1,000) for taxable gross receipts over $25,000,000 SEC. 953.4. GROSS RECEIPTS TAX APPLICABLE TO PRIVATE EDUCATION AND HEALTH SERVICES; ADMINISTRATIVE AND SUPPORT SERVICES; AND MISCELLANEOUS BUSINESS ACTIVITIES. (a) The gross receipts tax rates applicable to the business activities of private education and health services, administrative and support services,' - 'SARI до 202 BAY 10 FA 1:32 year by 52, and dividing the result by the number of weeks that the employee was employed by that person or combined group during the tax year. "Highest-Paid Managerial Employee" means the individual employee or officer of a person or combined group with managerial responsibility in a business function who received the most Compensation for a tax year. For purposes of determining the Highest-Paid Managerial Employee and the Compensation of such employee, Compensation shall not be annualized or converted to a full-time equivalency. SEC. 3303. IMPOSITION OF TAX. 
(a) Except as otherwise provided in this Article 33, commencing with tax years beginning on or after January 1, 2022, for the privilege of engaging in business in the City, the City imposes an annual Overpaid Executive Gross Receipts Tax on each person engaging in business within the City where the Executive Pay Ratio for the tax year of that person or the combined group of which it is a part exceeds 100:1.' - source_sentence: Based on the context information provided, what are the different gross receipts tax rates for businesses in San Francisco for tax years 2022, 2023, and 2024? sentences: - '$9.75 per $1,000) for taxable gross receipts over $25,000,000 44SANCO 2024 NAY LO (D) For tax year 2024 if the Controller certifies under Section 953.10 that the DEPARTMENT OF 95% gross receipts threshold has been met for tax year 2024, and for tax years beginning on or after January 1, 2025: 0.814% (e.g. $8.14 per $1,000) for taxable gross receipts between $0 and $1,000,000 0.853% (e.g. $8.53 per $1,000) for taxable gross receipts between $1,000,000.01 and $2,500,000 0.93% (e.g. $9.30 per $1,000) for taxable gross receipts between $2,500,000.01 and $25,000,000 1.008% (e.g. $10.08 per $1,000) for taxable gross receipts over $25,000,000 (3) For all business activities not otherwise exempt and not elsewhere subjected to a gross receipts tax rate or an administrative office tax by this Article 12-A-1: (B) For tax years 2022 and, if the Controller does not certify under Section 953.10 that the 90% gross receipts threshold has been met for tax year 2023, for tax year 2023: 0.788% (e.g. $7.88 per $1,000) for taxable gross receipts between $0 and $1,000,000 0.825% (e.g. $8.25 per $1,000) for taxable gross receipts between $1,000,000.01 and $2,500,000 0.9% (e.g. $9 per $1,000) for taxable gross receipts between $2,500,000.01 and $25,000,000 0.975% (e.g. $9.75 per $1,000) for taxable gross receipts over $25,000,000 (C) For tax year 2023 if the Controller certifies under Section 953.10 that the 90% gross receipts threshold has been met for tax year 2023,' - '(d) In no event shall the credit under this Section 960.4 reduce a person or combined group''s Gross Receipts Tax liability to less than $0 for any tax year. The credit under this Section shall not be refundable and may not be carried forward to a subsequent year. SEC. 966. CONTROLLER REPORTS. The Controller shall prepare reports by September 1, 2026, and September 1, 2027, respectively, that discuss current economic conditions in the City and the performance of the tax system revised by the voters in the ordinance adding this Section 966. Section 6. Article 21 of the Business and Tax Regulations Code is hereby amended by revising Section 2106 to read as follows: SEC. 2106. SMALL BUSINESS EXEMPTION. (a) For tax years ending on or before December 31, 2024, nNotwithstanding any other provision of this Article 21, a person or combined group exempt from payment of the gross receipts tax under Section 954.1 of Article 12-A-1, as amended from time to time, shall also be exempt from payment of the Early Care and Education Commercial Rents Tax. 79SAN DL W(b) For tax years beginning on or after January 1, 2025, notwithstanding any other provision of this Article 21, a "small business enterprise" shall be exempt from payment of the Early Care and Education Commercial Rents Tax. 
For purposes of this subsection (b), the term "small business enterprise" shall mean any person or combined group whose gross receipts within the City, determined under Article 12-A-1, did not exceed $2,325,000, adjusted annually in accordance with the increase in the Consumer Price Index: All Urban Consumers for the San Francisco/Oakland/Hayward Area for All Items as reported by the United States Bureau of Labor Statistics, or any successor to that index, as of December 31 of the calendar year two years prior to the tax year, beginning with tax year 2026, and rounded to the nearest $10,000. This subsection (b) shall not apply to a person or combined group subject to a tax on administrative office business activities in Section 953.8 of Article 12-A-1. Section 7.' - '(1) For tax years ending on or before December 31, 2024, the overpaid executive administrative office tax shall be calculated as follows: (+4) 0.4% of the person or combined group''s total payroll expense attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 100:1, but less than or equal to 200:1; (2B) 0.8% of the person or combined group''s total payroll expense attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 200:1, but less than or equal to 300:1; (3C) 1.2% of the person or combined group''s total payroll expense attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 300:1, but less than or equal to 400:1; 103SAULAMIC 50 10 PM 1:32 (4D) 1.6% of the person or combined group''s total payroll expense DEPARTMENT OF ELECTIONS attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 400:1, but less than or equal to 500:1; (5E) 2% of the person or combined group''s total payroll expense attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 500:1, but less than or equal to 600:1; or (6F) 2.4% of the person or combined group''s total payroll expense attributable to the City for a tax year if the person or combined group has an Executive Pay Ratio for that tax year of greater than 600:1.' model-index: - name: SentenceTransformer based on microsoft/mpnet-base results: - task: type: knowledge-distillation name: Knowledge Distillation dataset: name: stsb dev type: stsb-dev metrics: - type: negative_mse value: -2.4281997233629227 name: Negative Mse --- # SentenceTransformer based on microsoft/mpnet-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Areeb-02/mpnet-base-GISTEmbedLoss-MSEE_Evaluator-salestax-docs") # Run inference sentences = [ 'Based on the context information provided, what are the different gross receipts tax rates for businesses in San Francisco for tax years 2022, 2023, and 2024?', '$9.75 per $1,000) for taxable gross receipts over $25,000,000\n44SANCO\n2024 NAY LO\n(D) For tax year 2024 if the Controller certifies under Section 953.10 that the\nDEPARTMENT OF\n95% gross receipts threshold has been met for tax year 2024, and for tax years beginning on or after\nJanuary 1, 2025:\n0.814% (e.g. $8.14 per $1,000) for taxable gross receipts between $0 and $1,000,000\n0.853% (e.g. $8.53 per $1,000) for taxable gross receipts between $1,000,000.01 and\n$2,500,000\n0.93% (e.g. $9.30 per $1,000) for taxable gross receipts between $2,500,000.01 and\n$25,000,000\n1.008% (e.g. $10.08 per $1,000) for taxable gross receipts over $25,000,000\n(3) For all business activities not otherwise exempt and not elsewhere\nsubjected to a gross receipts tax rate or an administrative office tax by this Article 12-A-1:\n(B) For tax years 2022 and, if the Controller does not certify under\nSection 953.10 that the 90% gross receipts threshold has been met for tax year 2023, for tax\nyear 2023:\n0.788% (e.g. $7.88 per $1,000) for taxable gross receipts between $0 and $1,000,000\n0.825% (e.g. $8.25 per $1,000) for taxable gross receipts between $1,000,000.01 and\n$2,500,000\n0.9% (e.g. $9 per $1,000) for taxable gross receipts between $2,500,000.01 and\n$25,000,000\n0.975% (e.g. $9.75 per $1,000) for taxable gross receipts over $25,000,000\n(C) For tax year 2023 if the Controller certifies under Section 953.10 that the\n90% gross receipts threshold has been met for tax year 2023,', '(d) In no event shall the credit under this Section 960.4 reduce a person or combined group\'s\nGross Receipts Tax liability to less than $0 for any tax year. 
The credit under this Section shall not be\nrefundable and may not be carried forward to a subsequent year.\nSEC. 966. CONTROLLER REPORTS.\nThe Controller shall prepare reports by September 1, 2026, and September 1, 2027,\nrespectively, that discuss current economic conditions in the City and the performance of the tax system\nrevised by the voters in the ordinance adding this Section 966.\nSection 6. Article 21 of the Business and Tax Regulations Code is hereby amended by\nrevising Section 2106 to read as follows:\nSEC. 2106. SMALL BUSINESS EXEMPTION.\n(a) For tax years ending on or before December 31, 2024, nNotwithstanding any other\nprovision of this Article 21, a person or combined group exempt from payment of the gross\nreceipts tax under Section 954.1 of Article 12-A-1, as amended from time to time, shall also\nbe exempt from payment of the Early Care and Education Commercial Rents Tax.\n79SAN\nDL W(b) For tax years beginning on or after January 1, 2025, notwithstanding any other provision\nof this Article 21, a "small business enterprise" shall be exempt from payment of the Early Care and\nEducation Commercial Rents Tax. For purposes of this subsection (b), the term "small business\nenterprise" shall mean any person or combined group whose gross receipts within the City, determined\nunder Article 12-A-1, did not exceed $2,325,000, adjusted annually in accordance with the increase in\nthe Consumer Price Index: All Urban Consumers for the San Francisco/Oakland/Hayward Area for All\nItems as reported by the United States Bureau of Labor Statistics, or any successor to that index, as of\nDecember 31 of the calendar year two years prior to the tax year, beginning with tax year 2026, and\nrounded to the nearest $10,000. This subsection (b) shall not apply to a person or combined group\nsubject to a tax on administrative office business activities in Section 953.8 of Article 12-A-1.\nSection 7.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Knowledge Distillation * Dataset: `stsb-dev` * Evaluated with [<code>MSEEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.MSEEvaluator) | Metric | Value | |:-----------------|:------------| | **negative_mse** | **-2.4282** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 238 training samples * Columns: <code>sentence1</code> and <code>sentence2</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 41.95 tokens</li><li>max: 219 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 426.3 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence1 | sentence2 | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What types of businesses are subject to the gross receipts tax in San Francisco, and how is their San Francisco gross receipts calculated? 
What are the current rates for this tax, and are there any exemptions or scheduled increases?</code> | <code>The Way It Is Now<br>CHANGES TO BUSINESS TAXES<br>The City collects various business taxes on an annual basis including:<br>O<br>•<br>SAN FRANCISCO<br>FILED<br>2024 MAY 15 PM 3:10<br>DEPARTMENT OF ELECTIONS<br>A gross receipts tax that is a percentage of a business's San Francisco gross receipts.<br>Depending on business type, the City determines a business's San Francisco gross<br>receipts based on sales in San Francisco, payroll expenses for employees working there,<br>or both. Rates range from 0.053% to 1.008% and are scheduled to increase in coming<br>years. Rates depend on business type, and higher rates apply as a business generates<br>more gross receipts. For 2023, most businesses with gross receipts up to $2.19 million<br>are exempt.<br>A homelessness gross receipts tax that is an additional tax on businesses with San<br>Francisco gross receipts over $50 million. Rates range from 0.175% to 0.69%.<br>An overpaid executive gross receipts tax that is an additional tax on businesses that pay<br>their highest-paid managerial employee much higher than the median compensation they<br>pay their San Francisco employees. Rates are between 0.1% and 0.6%.<br>A business registration fee that is an additional tax. For most businesses the fee is<br>currently between $47 and $45,150, based on business type and amount of gross receipts.<br>• An administrative office tax on payroll expenses that certain large businesses pay instead<br>of these other business taxes. The combined rates in 2024 range from 3.04% to 5.44%,<br>and in 2025 are scheduled to range from 3.11% to 5.51%. Business registration fees for<br>these businesses currently range from $19,682 to $45,928.<br>State law limits the total revenue, including tax revenue, the City may spend each year. The<br>voters may approve increases to this limit for up to four years.</code> | | <code>What is the homelessness gross receipts tax, and which businesses are required to pay it? What are the current rates for this tax, and how do they vary based on the amount of San Francisco gross receipts? Are there any exemptions or scheduled increases for this tax?</code> | <code>The Way It Is Now<br>CHANGES TO BUSINESS TAXES<br>The City collects various business taxes on an annual basis including:<br>O<br>•<br>SAN FRANCISCO<br>FILED<br>2024 MAY 15 PM 3:10<br>DEPARTMENT OF ELECTIONS<br>A gross receipts tax that is a percentage of a business's San Francisco gross receipts.<br>Depending on business type, the City determines a business's San Francisco gross<br>receipts based on sales in San Francisco, payroll expenses for employees working there,<br>or both. Rates range from 0.053% to 1.008% and are scheduled to increase in coming<br>years. Rates depend on business type, and higher rates apply as a business generates<br>more gross receipts. For 2023, most businesses with gross receipts up to $2.19 million<br>are exempt.<br>A homelessness gross receipts tax that is an additional tax on businesses with San<br>Francisco gross receipts over $50 million. Rates range from 0.175% to 0.69%.<br>An overpaid executive gross receipts tax that is an additional tax on businesses that pay<br>their highest-paid managerial employee much higher than the median compensation they<br>pay their San Francisco employees. Rates are between 0.1% and 0.6%.<br>A business registration fee that is an additional tax. 
For most businesses the fee is<br>currently between $47 and $45,150, based on business type and amount of gross receipts.<br>• An administrative office tax on payroll expenses that certain large businesses pay instead<br>of these other business taxes. The combined rates in 2024 range from 3.04% to 5.44%,<br>and in 2025 are scheduled to range from 3.11% to 5.51%. Business registration fees for<br>these businesses currently range from $19,682 to $45,928.<br>State law limits the total revenue, including tax revenue, the City may spend each year. The<br>voters may approve increases to this limit for up to four years.</code> | | <code>What is the proposed measure that voters may approve to change the City's business taxes in San Francisco?</code> | <code>The<br>voters may approve increases to this limit for up to four years.<br>The Proposal<br>The proposed measure would change the City's business taxes to:<br>•<br>For the gross receipts tax:<br>○ recategorize business types, reducing the number from 14 to seven;<br>determine San Francisco gross receipts for some businesses based less on payroll<br>expenses and more on sales;<br>o change rates to between 0.1% and 3.716%; and<br>exempt most businesses with gross receipts up to $5 million (increased by<br>inflation).<br>Apply the homelessness gross receipts tax on business activities with San Francisco gross<br>receipts over $25 million, at rates between 0.162% and 1.64%.<br>Modify how the City calculates the overpaid executive gross receipts tax and who pays<br>that tax, and set rates between 0.02% and 0.129%.<br>Adjust business registration fees to between $55 and $60,000 (increased by inflation).Adjust the administrative office tax rates for certain large businesses to range from 2.97%<br>to 3.694%, and the business registration fees for these taxpayers to between $500 and<br>$35,000 (increased by inflation).<br>Make administrative and other changes to the City's business taxes.<br>The homelessness gross receipts tax would continue to fund services for people experiencing<br>homelessness and homelessness prevention. The City would use the other taxes for general<br>government purposes.<br>All these taxes would apply indefinitely until repealed.<br>This proposal would increase the City's spending limit for four years.SALITA CO<br>2024 MAY 10 PH 1:27<br>DEPARTMENT OF ELECTI<br>"Local Small Business Tax Cut Ordinance"<br>Be it ordained by the People of the City and County of San Francisco:<br>NOTE:<br>Unchanged Code text and uncodified text are in plain font.<br>Additions to Codes are in single-underline italics Times New Roman font.<br>Deletions to Codes are in strikethrough italics Times New Roman font.<br>Asterisks (* * * *) indicate the omission of unchanged Code<br>subsections or parts of tables.<br>Section 1. Title. This initiative is known and may be referred to as the "Local Small<br>Business Tax Cut Ordinance."<br>Section 2. Article 2 of the Business and Tax Regulations Code is hereby amended by<br>revising Section 76.3 to read as follows:<br>SEC. 
76.3.</code> | * Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters: ```json {'guide': SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ), 'temperature': 0.01} ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - 
`push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | stsb-dev_negative_mse | |:-----:|:----:|:---------------------:| | 0 | 0 | -2.4282 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### GISTEmbedLoss ```bibtex @misc{solatorio2024gistembed, title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning}, author={Aivin V. Solatorio}, year={2024}, eprint={2402.16829}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
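The training details above name [`GISTEmbedLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with a guide model and `temperature: 0.01`, but no end-to-end script. Below is a minimal fine-tuning sketch of that setup with the Sentence Transformers v3 trainer; the guide checkpoint and the `(sentence1, sentence2)` pairs are illustrative stand-ins, since the card does not name them:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import GISTEmbedLoss

# Anchor/positive pairs in the same two-column layout as the training dataset
# above (sentence1 = question, sentence2 = legal-text chunk).
train_dataset = Dataset.from_dict({
    "sentence1": [
        "What is the small business exemption threshold?",
        "Which businesses pay the homelessness gross receipts tax?",
    ],
    "sentence2": [
        "SEC. 2106. SMALL BUSINESS EXEMPTION. (a) For tax years ending on or before ...",
        "SEC. 2804.3. HOMELESSNESS GROSS RECEIPTS TAX APPLICABLE TO ...",
    ],
})

model = SentenceTransformer("microsoft/mpnet-base")
# The guide model scores in-batch pairs so GISTEmbedLoss can discard likely
# false negatives; any strong embedding model can serve here (stand-in below).
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

loss = GISTEmbedLoss(model, guide, temperature=0.01)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```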
1231czx/2b_1_nll_dpo_iter2_200step
1231czx
2024-06-21T22:37:56Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T22:35:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fumiyau/gpt2-001-small
fumiyau
2024-06-21T22:28:58Z
7
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "en", "dataset:Salesforce/wikitext", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T12:06:55Z
--- language: - en library_name: transformers datasets: - Salesforce/wikitext widget: - text: The University of Tokyo is example_title: Explanation-001 - text: George Washington was example_title: Explanation-002 - text: 'On a shelf, there are three books: a gray book, a red book,' example_title: StoryCompletion-001 - text: 'Once upon a time, ' example_title: Poem-001 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bmahak2005/vit-base-patch16-224-finetuned-flower
bmahak2005
2024-06-21T21:58:29Z
6
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-21T21:46:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: vit-base-patch16-224-finetuned-flower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 2.3.0+cu121 - Datasets 2.7.1 - Tokenizers 0.13.3
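The card gives no usage snippet; here is a minimal inference sketch, assuming the repository includes the preprocessing config exported alongside the weights (the input file name is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and its image processor from the Hub.
classifier = pipeline(
    "image-classification",
    model="bmahak2005/vit-base-patch16-224-finetuned-flower",
)

# Any local file path or URL to a flower image works here.
for prediction in classifier("flower.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```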
async0x42/magnum-72b-v1-exl2_3.75bpw
async0x42
2024-06-21T21:30:45Z
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "zh", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-06-21T16:06:35Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE language: - en - zh pipeline_tag: text-generation tags: - chat --- ![](https://files.catbox.moe/ngqnb1.png) This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct). ## Prompting The model has been instruct-tuned with ChatML formatting. A typical input would look like this: ```py """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` (A programmatic way to build this prompt is sketched at the end of this card.) ## Credits This model has been a team effort; credits go to: - [Sao10K](https://huggingface.co/Sao10K) for help with (and cleaning up!) the dataset. - [alpindale](https://huggingface.co/alpindale) for the training. - [kalomaze](https://huggingface.co/kalomaze) for helping with the hyperparameter tuning. - Various other people for their continued help as we tuned the parameters and restarted failed runs. In no particular order: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun), [Lucy](https://huggingface.co/lucyknada), [Nopm](https://huggingface.co/nopm), [Mango](https://huggingface.co/MangoMango69420), and the rest of the Silly Tilly. And last but not least, we'd like to thank [Kearm](https://twitter.com/Nottlespike) for sponsoring the compute needed to train this model. ## Training The training was done with 55 million tokens of high-quality RP data, over 1.5 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ...
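Rather than writing the ChatML wrapper by hand, the same prompt can be produced with `tokenizer.apply_chat_template` — a minimal sketch, assuming this exl2 repo ships the upstream Qwen2 tokenizer with its ChatML chat template:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("async0x42/magnum-72b-v1-exl2_3.75bpw")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant"
# turn, matching the hand-written example above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```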
alielfilali01/Q2AW1M-1001
alielfilali01
2024-06-21T21:27:19Z
2,912
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T21:21:36Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vutuka/llama-3-8b-african-aya-f16
vutuka
2024-06-21T21:25:44Z
43
2
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-27T03:48:14Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** vutuka - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit ## llama.cpp ```py from llama_cpp import Llama llm = Llama.from_pretrained( repo_id="vutuka/llama-3-8b-african-aya-f16", filename="*F16.gguf", verbose=False, ) output = llm( "Instruction: Ni nchi zipi zilizo tajiri zaidi duniani ?\n Response: ", # Prompt (Swahili: "Which countries are the richest in the world?") max_tokens=None, echo=True ) print(output) ``` This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ishikaibm/mod_dep_fl-PEFT_mix-instruct
ishikaibm
2024-06-21T21:22:39Z
5
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "base_model:adapter:EleutherAI/gpt-neo-125m", "license:mit", "region:us" ]
null
2024-06-21T19:06:36Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: EleutherAI/gpt-neo-125m metrics: - accuracy model-index: - name: mod_dep_fl-PEFT_mix-instruct results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mod_dep_fl-PEFT_mix-instruct This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5502 - Accuracy: -1701.4273 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:----------:| | 4.5917 | 0.9524 | 10 | 2.8083 | -1780.3744 | | 3.7289 | 2.0 | 21 | 2.6099 | -1691.0399 | | 3.548 | 2.9524 | 31 | 2.5788 | -1751.4823 | | 3.5153 | 4.0 | 42 | 2.5515 | -1766.1752 | | 3.4735 | 4.7619 | 50 | 2.5502 | -1701.4273 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
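Since the usage sections above are still empty, a minimal inference sketch with `peft` could look like the following (untested; the prompt string is a placeholder, and it assumes the adapter resolves against the `EleutherAI/gpt-neo-125m` base listed in the metadata):

```python
# Minimal sketch (untested): load the adapter on top of its GPT-Neo-125M base.
# AutoPeftModelForCausalLM reads the base model id from the adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("ishikaibm/mod_dep_fl-PEFT_mix-instruct")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")

inputs = tokenizer("Instruction: say hello.\nResponse:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```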
bigstorm/dolphin-2.9.2-qwen2-72b-6.0bpw-exl2
bigstorm
2024-06-21T21:19:30Z
8
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "exl2", "exllamasv2", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:Qwen/Qwen2-72B", "base_model:quantized:Qwen/Qwen2-72B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "region:us" ]
text-generation
2024-06-21T12:42:31Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE base_model: Qwen/Qwen2-72B tags: - generated_from_trainer - axolotl - exl2 - exllamasv2 datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # Exl2 Quantization 6.0BPW This model fits comfortably within 72 GB of VRAM with 32k context. It was created after an inference/quantization bug was repaired. - 6 head bits - 6.0 bpw target Enjoy! Feel free to reach out for other quants or BPW levels. # Dolphin 2.9.2 Qwen2 72B 🐬 Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) Discord: https://discord.gg/cognitivecomputations <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> Our appreciation for the sponsors of Dolphin 2.9.2: - [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node This model is based on Qwen2-72B and is governed by the [tongyi-qianwen license](LICENSE). The base model has 128k context, and the full-weight fine-tuning was done with an 8k sequence length. This model was trained with full-weight fine-tuning (FFT) on parameters selected by [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py), using the ChatML prompt template format. Example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed according to Qwen's tongyi-qianwen license. We grant permission for any use, including commercial, that is in accordance with said license. Dolphin was trained on data generated by GPT-4, among other models.
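Rather than formatting the ChatML prompt by hand, it can be rendered from the tokenizer's chat template; here is a minimal sketch (untested; the upstream repo id `cognitivecomputations/dolphin-2.9.2-qwen2-72b` and the example messages are assumptions, not verified against this quant):

```python
# Illustrative sketch: render the ChatML prompt shown above via the
# tokenizer's chat template instead of manual string formatting.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9.2-qwen2-72b")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "What is function calling?"},
]
# add_generation_prompt=True appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```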
## Evals ![image/png](https://i.ibb.co/B4x1Ddr/file-2ao0fl-K2-B2-Hmka-Epd0ja-QY0x.webp) [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: Qwen/Qwen2-72B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer trust_remote_code: true # load_in_8bit: true # load_in_4bit: false # strict: false datasets: - path: /workspace/datasets/dolphin-2.9.2/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/SystemChat_sharegpt.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.2/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.62.mlp.down_proj - model.layers.63.mlp.down_proj - model.layers.66.mlp.down_proj - model.layers.65.mlp.down_proj - model.layers.64.mlp.down_proj - model.layers.67.mlp.down_proj - model.layers.68.mlp.down_proj - model.layers.60.mlp.down_proj - model.layers.31.mlp.down_proj - model.layers.69.mlp.down_proj - model.layers.61.mlp.down_proj - model.layers.59.mlp.down_proj - model.layers.70.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.76.mlp.down_proj - model.layers.72.mlp.down_proj - model.layers.77.mlp.down_proj - model.layers.71.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.58.mlp.down_proj - model.layers.75.mlp.down_proj - model.layers.32.mlp.down_proj - model.layers.56.mlp.down_proj - model.layers.28.mlp.down_proj - model.layers.26.mlp.down_proj - model.layers.33.mlp.down_proj - model.layers.34.mlp.down_proj - model.layers.57.mlp.down_proj - model.layers.27.mlp.down_proj - model.layers.25.mlp.down_proj - model.layers.35.mlp.down_proj - model.layers.73.mlp.down_proj - model.layers.24.mlp.down_proj - model.layers.78.mlp.down_proj - model.layers.74.mlp.down_proj - model.layers.54.mlp.down_proj # mlp.gate_proj layers - model.layers.78.mlp.gate_proj - model.layers.77.mlp.gate_proj - 
model.layers.76.mlp.gate_proj - model.layers.79.mlp.gate_proj - model.layers.75.mlp.gate_proj - model.layers.74.mlp.gate_proj - model.layers.73.mlp.gate_proj - model.layers.70.mlp.gate_proj - model.layers.72.mlp.gate_proj - model.layers.71.mlp.gate_proj - model.layers.69.mlp.gate_proj - model.layers.54.mlp.gate_proj - model.layers.68.mlp.gate_proj - model.layers.57.mlp.gate_proj - model.layers.63.mlp.gate_proj - model.layers.49.mlp.gate_proj - model.layers.55.mlp.gate_proj - model.layers.53.mlp.gate_proj - model.layers.44.mlp.gate_proj - model.layers.46.mlp.gate_proj - model.layers.67.mlp.gate_proj - model.layers.58.mlp.gate_proj - model.layers.56.mlp.gate_proj - model.layers.45.mlp.gate_proj - model.layers.50.mlp.gate_proj - model.layers.62.mlp.gate_proj - model.layers.64.mlp.gate_proj - model.layers.48.mlp.gate_proj - model.layers.66.mlp.gate_proj - model.layers.52.mlp.gate_proj - model.layers.40.mlp.gate_proj - model.layers.47.mlp.gate_proj - model.layers.43.mlp.gate_proj - model.layers.65.mlp.gate_proj - model.layers.61.mlp.gate_proj - model.layers.59.mlp.gate_proj # mlp.up_proj layers - model.layers.69.mlp.up_proj - model.layers.70.mlp.up_proj - model.layers.71.mlp.up_proj - model.layers.68.mlp.up_proj - model.layers.67.mlp.up_proj - model.layers.66.mlp.up_proj - model.layers.46.mlp.up_proj - model.layers.63.mlp.up_proj - model.layers.72.mlp.up_proj - model.layers.64.mlp.up_proj - model.layers.62.mlp.up_proj - model.layers.45.mlp.up_proj - model.layers.65.mlp.up_proj - model.layers.73.mlp.up_proj - model.layers.47.mlp.up_proj - model.layers.44.mlp.up_proj - model.layers.49.mlp.up_proj - model.layers.48.mlp.up_proj - model.layers.53.mlp.up_proj - model.layers.74.mlp.up_proj - model.layers.75.mlp.up_proj - model.layers.57.mlp.up_proj - model.layers.76.mlp.up_proj - model.layers.43.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.61.mlp.up_proj - model.layers.40.mlp.up_proj - model.layers.56.mlp.up_proj - model.layers.60.mlp.up_proj - model.layers.31.mlp.up_proj - model.layers.54.mlp.up_proj - model.layers.55.mlp.up_proj - model.layers.32.mlp.up_proj - model.layers.41.mlp.up_proj - model.layers.33.mlp.up_proj - model.layers.58.mlp.up_proj # self_attn.k_proj layers - model.layers.79.self_attn.k_proj - model.layers.36.self_attn.k_proj - model.layers.35.self_attn.k_proj - model.layers.74.self_attn.k_proj - model.layers.34.self_attn.k_proj - model.layers.78.self_attn.k_proj - model.layers.77.self_attn.k_proj - model.layers.37.self_attn.k_proj - model.layers.39.self_attn.k_proj - model.layers.41.self_attn.k_proj - model.layers.38.self_attn.k_proj - model.layers.33.self_attn.k_proj - model.layers.69.self_attn.k_proj - model.layers.42.self_attn.k_proj - model.layers.32.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.70.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.63.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.68.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.30.self_attn.k_proj - model.layers.66.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.65.self_attn.k_proj - model.layers.57.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.64.self_attn.k_proj - model.layers.44.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.75.self_attn.k_proj - model.layers.40.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.61.self_attn.k_proj # self_attn.o_proj layers - model.layers.14.self_attn.o_proj - 
model.layers.39.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.69.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.42.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.29.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.46.self_attn.o_proj - model.layers.52.self_attn.o_proj - model.layers.26.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.41.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.49.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.47.self_attn.o_proj - model.layers.53.self_attn.o_proj - model.layers.27.self_attn.o_proj - model.layers.37.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.43.self_attn.o_proj - model.layers.44.self_attn.o_proj - model.layers.45.self_attn.o_proj - model.layers.30.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.3.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.4.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.8.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.19.self_attn.q_proj - model.layers.18.self_attn.q_proj - model.layers.25.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.61.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.55.self_attn.q_proj - model.layers.54.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.68.self_attn.q_proj - model.layers.49.self_attn.q_proj - model.layers.48.self_attn.q_proj - model.layers.52.self_attn.q_proj - model.layers.13.self_attn.q_proj - model.layers.42.self_attn.q_proj - model.layers.57.self_attn.q_proj - model.layers.60.self_attn.q_proj - model.layers.53.self_attn.q_proj - model.layers.64.self_attn.q_proj - model.layers.66.self_attn.q_proj - model.layers.62.self_attn.q_proj - model.layers.59.self_attn.q_proj - model.layers.50.self_attn.q_proj # self_attn.v_proj layers - model.layers.15.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.24.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.26.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.32.self_attn.v_proj - model.layers.33.self_attn.v_proj - model.layers.34.self_attn.v_proj - model.layers.35.self_attn.v_proj - model.layers.36.self_attn.v_proj - model.layers.37.self_attn.v_proj - model.layers.38.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.41.self_attn.v_proj - model.layers.42.self_attn.v_proj - model.layers.48.self_attn.v_proj - model.layers.53.self_attn.v_proj - model.layers.57.self_attn.v_proj - model.layers.58.self_attn.v_proj - model.layers.59.self_attn.v_proj - model.layers.61.self_attn.v_proj - model.layers.63.self_attn.v_proj - model.layers.64.self_attn.v_proj - 
model.layers.65.self_attn.v_proj - model.layers.66.self_attn.v_proj - model.layers.69.self_attn.v_proj - model.layers.74.self_attn.v_proj - model.layers.75.self_attn.v_proj - model.layers.76.self_attn.v_proj - model.layers.72.self_attn.v_proj chat_template: chatml dataset_prepared_path: qwen2-72b-data val_set_size: 0.01 output_dir: qwen2-72b sequence_len: 8192 # supports up to 8192 sample_packing: true pad_to_sequence_len: true # adapter: lora # lora_model_dir: # lora_r: 32 # lora_alpha: 16 # lora_dropout: 0.05 # lora_target_linear: true # lora_fan_in_fan_out: wandb_project: qwen2-72b wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 2 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 4 save_total_limit: 2 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: pad_token: "<|endoftext|>" eos_token: "<|im_end|>" ```
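For the EXL2 weights themselves, a rough loading sketch with the `exllamav2` Python API might look like the following (untested; the local path and sampler settings are placeholders, and exact class names can vary across library versions):

```python
# Rough sketch (untested): loading this EXL2 quant with exllamav2's basic
# generator API. Paths and settings below are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/dolphin-2.9.2-qwen2-72b-6.0bpw-exl2"  # local download
config.prepare()
config.max_seq_len = 32768  # per the note above, 32k context fits in ~72 GB VRAM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across the available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 256))
```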
pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e
pszemraj
2024-06-21T21:15:22Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "LinkedCringe", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-05-30T03:11:29Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification - LinkedCringe pipeline_tag: text-classification thumbnail: https://i.ibb.co/SPVBJrz/model-card.jpg --- # LinkedCringe v0.2: e5-small > fine-tuned on LinkedCringe v0.2 from [intfloat/e5-small](https://huggingface.co/intfloat/e5-small) <a href="https://ibb.co/VMJPTwK"><img src="https://i.ibb.co/XFjvtYw/carbon.png" alt="carbon" border="0"></a> <!-- alternate --> <!-- <a href="https://ibb.co/hR49z8Q"><img src="https://i.ibb.co/991g5YK/image.png" alt="image" border="0"></a> --> <a href="https://colab.research.google.com/gist/pszemraj/0b0c2663aa38f3b5f2d923010cfda5a8/scratchpad.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> This is an initial test/work-in-progress, but not bad thus far. ## Model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ### Labels This model has been trained (_using methods described above_) to predict a single class label for `<text>` from the following: ``` # numeric id: text label { 1: 'cringe', 2: 'relevant', 3: 'info', 4: 'noise' } ``` --- ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` ### basic inference You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) # manually refer to labels above preds ``` ### Class object with utils Create a "custom" wrapper class with the labels: ```python from setfit import SetFitModel from typing import List, Dict class PostClassifier: DEFAULT_ID2LABEL = {1: "cringe", 2: "relevant", 3: "info", 4: "noise"} def __init__( self, model_id: str = "pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e", id2label: Dict[int, str] = None, ): """Initialize PostClassifier with model name and/or label mapping.""" self.model = SetFitModel.from_pretrained(model_id) self.id2label = id2label if id2label else self.DEFAULT_ID2LABEL def classify(self, texts: List[str]) -> List[str]: """Classify list of texts, return list of corresponding labels.""" preds = self.model(texts) return [self.id2label[int(pred)] for pred in preds] def predict_proba(self, texts: List[str]) -> List[Dict[str, float]]: """Predict label probabilities for a list of texts, return a list of probability dictionaries.""" proba = self.model.predict_proba(texts) return [ {self.id2label.get(i + 1, "Unknown"): float(p) for i, p in enumerate(pred)} for pred in proba ] def __call__(self, texts: List[str]) -> List[str]: """Enable class instance to act as a function for text classification.""" return self.classify(texts) ``` Instantiate & classify: ```python # import PostClassifier if you defined it in another script etc model_name = "pszemraj/e5-small-LinkedCringe-setfit-skl-20it-2e" classifier = PostClassifier(model_name) # classify some posts (these should all be cringe maaaaybe noise) posts = [ "🚀 Innovation is our middle name! 
We're taking synergy to new heights and disrupting the market with our game-changing solutions. Stay tuned for the next paradigm shift! 💥 #CorporateRevolution #SynergisticSolutions", "🌟 Attention all trailblazers! Our cutting-edge product is the epitome of excellence. It's time to elevate your success and ride the wave of unparalleled achievements. Join us on this journey towards greatness! 🚀 #UnleashYourPotential #SuccessRevolution", "🌍 We're not just a company, we're a global force for change! Our world-class team is committed to revolutionizing industries and making a lasting impact. Together, let's reshape the future and leave a legacy that will be remembered for ages! 💪 #GlobalTrailblazers #LegacyMakers", "🔥 Harness the power of synergy and unlock your true potential with our transformative solutions. Together, we'll ignite a fire of success that will radiate across industries. Join the league of winners and conquer new frontiers! 🚀 #SynergyChampions #UnleashThePowerWithin", "💡 Innovation alert! Our visionary team has cracked the code to redefine excellence. Get ready to be blown away by our mind-boggling breakthroughs that will leave your competitors in the dust. It's time to disrupt the status quo and embrace the future! 🌟 #InnovationRevolution #ExcellenceUnleashed", "🌐 Welcome to the era of limitless possibilities! Our revolutionary platform will empower you to transcend boundaries and achieve unprecedented success. Together, let's shape a future where dreams become realities and ordinary becomes extraordinary! ✨ #LimitlessSuccess #DreamBig", "💥 Brace yourselves for a seismic shift in the industry! Our game-changing product is set to revolutionize the way you work, think, and succeed. Say goodbye to mediocrity and join the league of pioneers leading the charge towards a brighter tomorrow! 🚀 #IndustryDisruptors #PioneeringSuccess", "🚀 Attention all innovators and disruptors! It's time to break free from the chains of convention and rewrite the rulebook of success. Join us on this exhilarating journey as we create a new chapter in the annals of greatness. The sky's not the limit—it's just the beginning! 💫 #BreakingBarriers #UnleashGreatness", "🌟 Unlock the secret to unprecedented achievements with our exclusive formula for success. Our team of experts has distilled years of wisdom into a powerful elixir that will propel you to the zenith of greatness. It's time to embrace the extraordinary and become a legend in your own right! 💥 #FormulaForSuccess #RiseToGreatness", "🔑 Step into the realm of infinite possibilities and seize the keys to your success. Our groundbreaking solutions will unlock doors you never knew existed, propelling you towards a future filled with limitless growth and prosperity. Dare to dream big and let us be your catalyst for greatness! 🚀 #UnlockYourPotential #LimitlessSuccess" ] post_preds = classifier(posts) print(post_preds) ``` ## eval - detailed ``` ***** Running evaluation ***** {'accuracy': 0.8, 'based_model_id': 'intfloat/e5-small', 'tuned_model_id': 'e5-small-LinkedCringe-setfit-skl-20it-2e'} # 10-post results ['cringe', 'cringe', 'info', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe', 'cringe'] ``` --- ## BibTeX entry and citation info > Note: this is for `setfit` and not this checkpoint. 
```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
bartowski/llama3-turbcat-instruct-8b-GGUF
bartowski
2024-06-21T21:11:27Z
342
5
null
[ "gguf", "text-generation", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-21T20:21:03Z
--- license: llama3 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of llama3-turbcat-instruct-8b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization. Original model: https://huggingface.co/turboderp/llama3-turbcat-instruct-8b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [llama3-turbcat-instruct-8b-Q8_0_L.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q8_1.gguf) | Q8_0_L | 9.52GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. | | [llama3-turbcat-instruct-8b-Q8_0.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [llama3-turbcat-instruct-8b-Q6_K_L.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q6_K_L.gguf) | Q6_K_L | 7.83GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. | | [llama3-turbcat-instruct-8b-Q6_K.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [llama3-turbcat-instruct-8b-Q5_K_L.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q5_K_L.gguf) | Q5_K_L | 7.04GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. | | [llama3-turbcat-instruct-8b-Q5_K_M.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [llama3-turbcat-instruct-8b-Q5_K_S.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [llama3-turbcat-instruct-8b-Q4_K_L.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q4_K_L.gguf) | Q4_K_L | 6.29GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. | | [llama3-turbcat-instruct-8b-Q4_K_M.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. 
| | [llama3-turbcat-instruct-8b-Q4_K_S.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [llama3-turbcat-instruct-8b-IQ4_XS.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [llama3-turbcat-instruct-8b-Q3_K_XL.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. | | [llama3-turbcat-instruct-8b-Q3_K_L.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [llama3-turbcat-instruct-8b-Q3_K_M.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [llama3-turbcat-instruct-8b-IQ3_M.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [llama3-turbcat-instruct-8b-Q3_K_S.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [llama3-turbcat-instruct-8b-IQ3_XS.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [llama3-turbcat-instruct-8b-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [llama3-turbcat-instruct-8b-Q2_K.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [llama3-turbcat-instruct-8b-IQ2_M.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [llama3-turbcat-instruct-8b-IQ2_S.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [llama3-turbcat-instruct-8b-IQ2_XS.gguf](https://huggingface.co/bartowski/llama3-turbcat-instruct-8b-GGUF/blob/main/llama3-turbcat-instruct-8b-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. 
| ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/llama3-turbcat-instruct-8b-GGUF --include "llama3-turbcat-instruct-8b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/llama3-turbcat-instruct-8b-GGUF --include "llama3-turbcat-instruct-8b-Q8_0.gguf/*" --local-dir llama3-turbcat-instruct-8b-Q8_0 ``` You can either specify a new local-dir (llama3-turbcat-instruct-8b-Q8_0) or download them all in place (./). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan (which also supports AMD), so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
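As a footnote to the selection guidance above, here is a toy sketch (not part of any tooling in this repo) that applies the rule of thumb: pick the largest quant from the table whose file leaves roughly 2GB of VRAM headroom.

```python
# Toy illustration of the "file size 1-2GB smaller than your VRAM" heuristic.
quants = {  # a subset of file sizes in GB, taken from the table above
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q4_K_M": 4.92,
    "IQ4_XS": 4.44, "Q3_K_M": 4.01, "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0) -> str:
    """Return the largest quant whose file fits within vram_gb minus headroom."""
    fitting = {q: s for q, s in quants.items() if s <= vram_gb - headroom_gb}
    if not fitting:
        raise ValueError("No quant fits; offload to RAM or use a smaller model.")
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # an 8 GB card -> Q5_K_M (5.73GB)
```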
ReadyON/karakuri-lm-8x7b-instruct-v0.1-gguf
ReadyON
2024-06-21T21:02:25Z
21
3
null
[ "gguf", "text-generation-inference", "mixtral", "text-generation", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-06-20T17:51:43Z
--- license: apache-2.0 datasets: - TFMC/imatrix-dataset-for-japanese-llm language: - en - ja pipeline_tag: text-generation tags: - text-generation-inference - mixtral --- # KARAKURI LM 8x7B Instruct v0.1 GGUF - GGUF version of [KARAKURI LM 8x7B Instruct v0.1](https://huggingface.co/karakuri-ai/karakuri-lm-8x7b-instruct-v0.1) - Developed by: [KARAKURI Inc.](https://about.karakuri.ai/) - Languages: Primarily English and Japanese - License: Apache 2.0 - Finetuned from model: [tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1)
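No usage example is included yet; a minimal sketch with `llama-cpp-python` might look like the following (untested; the filename glob is hypothetical, so substitute the quant file you actually want, and see the upstream card for the intended prompt format):

```python
# Minimal sketch (untested): pull one GGUF file from this repo and run it.
# The filename pattern below is hypothetical -- match the quant you chose.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ReadyON/karakuri-lm-8x7b-instruct-v0.1-gguf",
    filename="*Q4_K_M.gguf",  # hypothetical glob; pick your quant
    n_ctx=4096,
)
out = llm("User: Hello!\nAssistant:", max_tokens=128)
print(out["choices"][0]["text"])
```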
tomaarsen/mpnet-base-gooaq-hard-negatives
tomaarsen
2024-06-21T20:52:23Z
14
0
sentence-transformers
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:2286783", "loss:MultipleNegativesRankingLoss", "en", "dataset:tomaarsen/gooaq-hard-negatives", "dataset:sentence-transformers/gooaq", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:microsoft/mpnet-base", "base_model:finetune:microsoft/mpnet-base", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-21T20:51:56Z
--- base_model: microsoft/mpnet-base datasets: - tomaarsen/gooaq-hard-negatives - sentence-transformers/gooaq language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:2286783 - loss:MultipleNegativesRankingLoss widget: - source_sentence: how to download a youtube video onto usb? sentences: - Copy YouTube URL to Download Go to YouTube video you want to download to USB and copy its URL. Paste the link to download YouTube. Choose a necessary video or audio format and quality. - Before surgeons are qualified to operate, they must meet a set of challenging education requirements. These generally include four years of undergraduate study, four years of medical school leading to a Doctor of Medicine (M.D.) degree, and three to eight years of surgical residency at a hospital. - A Roman numeral representing the number eighteen (18). - source_sentence: what is the best diet for a leaky gut? sentences: - When a woman is pregnant, she does not continue to ovulate and will not have a period. Menstruation only occurs when a person is not pregnant. Although it is possible for women to experience some bleeding during pregnancy, this will not be due to their menstrual cycle. - To combat leaky gut, eat foods that promote the growth of healthy gut bacteria, including fruits, cultured dairy products, healthy fats, lean meats, and fibrous and fermented vegetables. - Popcorn Ceiling vs Asbestos Popcorn Ceiling Removal Cost CostHelper says Popcorn ceilings not containing asbestos can expect to pay about $1 to $3 per square foot or $250 to $900 to remove a popcorn ceiling from a 15'x20' room or $1,200 to $1,400 for a 1,6000 sq. - source_sentence: what is the difference between joint tenancy and common tenancy? sentences: - You (TV series) You is an American psychological thriller television series developed by Greg Berlanti and Sera Gamble. ... In December 2018, it was announced that the series would move to Netflix as a Netflix Original title. The second season was released exclusively on Netflix on December 26, 2019. - A normal resting heart rate range is between 60 and 100 bpm. - Joint tenancy also differs from tenancy in common because when one joint tenant dies, the other remaining joint tenants inherit the deceased tenant's interest in the property. However, a joint tenancy does allow owners to sell their interests. If one owner sells, the tenancy is converted to a tenancy in common. - source_sentence: what is the cause of blood clots in urine? sentences: - If sufficient blood is present in the urine, the blood may form a clot. The clot can completely block the flow of urine, causing sudden extreme pain and inability to urinate. Bleeding severe enough to cause such a clot is usually caused by an injury to the urinary tract. - Distance is the magnitude (length) of the displacement vector. 
Path length is how far the object moved as it traveled from its initial position to its final position. - In fact, the brand is consistently ranked near the top of automakers in terms of the most expensive cars to maintain. The total maintenance costs of the average Audi over a 10-year span is $12,400. ... All cars are different, and many require more maintenance than some depending on their age and driving history. - source_sentence: are hard seltzers malt liquor? sentences: - The BCD method measures the distance from the apex of the breast down to the wire line directly below it. That measurement in inches will determine your cup and frame size. Then take your Rib Cage measurement directly under your bra. ... For example, the BCD might be 4.0 and the Rib Cage of 32. - Seltzer is carbonated water. “Hard seltzer” is a flavored malt beverage — essentially the same as a Lime-A-Rita or a Colt 45 or a Smirnoff Ice. These products derive their alcohol from fermented malted grains and are then carbonated, flavored and sweetened. - Bleaching action of chlorine is based on oxidation while that of sulphur is based on reduction. Chlorine acts with water to produce nascent oxygen. ... Sulphour dioxide removes oxygen from the coloured substance and makes it colourless. co2_eq_emissions: emissions: 1550.677005890232 energy_consumed: 3.989372336366245 source: codecarbon training_type: fine-tuning on_cloud: false cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K ram_total_size: 31.777088165283203 hours_used: 11.599 hardware_used: 1 x NVIDIA GeForce RTX 3090 model-index: - name: MPNet base trained on GooAQ triplets with hard negatives results: - task: type: information-retrieval name: Information Retrieval dataset: name: gooaq dev type: gooaq-dev metrics: - type: cosine_accuracy@1 value: 0.7413 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8697 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9055 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9427 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7413 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2899 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.1811 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09427000000000002 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7413 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8697 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9055 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9427 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8441925656083314 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8123759920634883 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8147743017171518 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.7384 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.8669 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9039 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 0.9389 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.7384 name: Dot Precision@1 - type: dot_precision@3 value: 0.28896666666666665 name: Dot Precision@3 - type: dot_precision@5 value: 0.18078000000000002 name: Dot Precision@5 - type: dot_precision@10 value: 0.09389000000000002 name: Dot Precision@10 - type: dot_recall@1 value: 0.7384 name: Dot Recall@1 - type: dot_recall@3 value: 0.8669 name: Dot Recall@3 - type: dot_recall@5 value: 0.9039 name: Dot Recall@5 - type: dot_recall@10 value: 0.9389 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8410831459293242 name: Dot Ndcg@10 - 
type: dot_mrr@10 value: 0.8094504365079324 name: Dot Mrr@10 - type: dot_map@100 value: 0.8120497186357559 name: Dot Map@100 --- # MPNet base trained on GooAQ triplets with hard negatives This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [train](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [train](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("tomaarsen/mpnet-base-gooaq-hard-negatives") # Run inference sentences = [ 'are hard seltzers malt liquor?', 'Seltzer is carbonated water. “Hard seltzer” is a flavored malt beverage — essentially the same as a Lime-A-Rita or a Colt 45 or a Smirnoff Ice. These products derive their alcohol from fermented malted grains and are then carbonated, flavored and sweetened.', 'Bleaching action of chlorine is based on oxidation while that of sulphur is based on reduction. Chlorine acts with water to produce nascent oxygen. ... Sulphour dioxide removes oxygen from the coloured substance and makes it colourless.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Dataset: `gooaq-dev` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7413 | | cosine_accuracy@3 | 0.8697 | | cosine_accuracy@5 | 0.9055 | | cosine_accuracy@10 | 0.9427 | | cosine_precision@1 | 0.7413 | | cosine_precision@3 | 0.2899 | | cosine_precision@5 | 0.1811 | | cosine_precision@10 | 0.0943 | | cosine_recall@1 | 0.7413 | | cosine_recall@3 | 0.8697 | | cosine_recall@5 | 0.9055 | | cosine_recall@10 | 0.9427 | | cosine_ndcg@10 | 0.8442 | | cosine_mrr@10 | 0.8124 | | **cosine_map@100** | **0.8148** | | dot_accuracy@1 | 0.7384 | | dot_accuracy@3 | 0.8669 | | dot_accuracy@5 | 0.9039 | | dot_accuracy@10 | 0.9389 | | dot_precision@1 | 0.7384 | | dot_precision@3 | 0.289 | | dot_precision@5 | 0.1808 | | dot_precision@10 | 0.0939 | | dot_recall@1 | 0.7384 | | dot_recall@3 | 0.8669 | | dot_recall@5 | 0.9039 | | dot_recall@10 | 0.9389 | | dot_ndcg@10 | 0.8411 | | dot_mrr@10 | 0.8095 | | dot_map@100 | 0.812 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: [train](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives) at [87594a1](https://huggingface.co/datasets/tomaarsen/gooaq-hard-negatives/tree/87594a1e6c58e88b5843afa9da3a97ffd75d01c2) * Size: 2,286,783 training samples * Columns: <code>question</code>, <code>answer</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>negative_5</code> * Approximate statistics based on the first 1000 samples: | | question | answer | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | string | string | string | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 11.84 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 59.41 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 59.09 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 58.61 tokens</li><li>max: 139 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 58.98 tokens</li><li>max: 173 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 59.43 tokens</li><li>max: 137 tokens</li></ul> | 
<ul><li>min: 13 tokens</li><li>mean: 60.03 tokens</li><li>max: 146 tokens</li></ul> |
* Samples:
  | question | answer | negative_1 | negative_2 | negative_3 | negative_4 | negative_5 |
  |:---------|:-------|:-----------|:-----------|:-----------|:-----------|:-----------|
  | <code>is toprol xl the same as metoprolol?</code> | <code>Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.</code> | <code>Secondly, metoprolol and metoprolol ER have different brand-name equivalents: Brand version of metoprolol: Lopressor. Brand version of metoprolol ER: Toprol XL.</code> | <code>Pill with imprint 1 is White, Round and has been identified as Metoprolol Tartrate 25 mg.</code> | <code>Interactions between your drugs No interactions were found between Allergy Relief and metoprolol. This does not necessarily mean no interactions exist. Always consult your healthcare provider.</code> | <code>Metoprolol is a type of medication called a beta blocker. It works by relaxing blood vessels and slowing heart rate, which improves blood flow and lowers blood pressure. Metoprolol can also improve the likelihood of survival after a heart attack.</code> | <code>Metoprolol starts to work after about 2 hours, but it can take up to 1 week to fully take effect. You may not feel any different when you take metoprolol, but this doesn't mean it's not working. It's important to keep taking your medicine.</code> |
  | <code>are you experienced cd steve hoffman?</code> | <code>The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.</code> | <code>I Saw the Light. Showcasing the unique talent and musical influence of country-western artist Hank Williams, this candid biography also sheds light on the legacy of drug abuse and tormented relationships that contributes to the singer's legend.</code> | <code>(Read our ranking of his top 10.) And while Howard dresses the part of director, any notion of him as a tortured auteur or dictatorial taskmasker — the clichés of the Hollywood director — are tossed aside. He's very nice.</code> | <code>He was a music star too. Where're you people born and brought up? We 're born and brought up here in Anambra State at Nkpor town, near Onitsha.</code> | <code>At the age of 87 he has now retired from his live shows and all the traveling involved. And although he still picks up his Martin Guitar and does a show now and then, his life is now devoted to writing his memoirs.</code> | <code>The owner of the mysterious voice behind all these videos is a man who's seen a lot, visiting a total of 56 intimate celebrity spaces over the course of five years. His name is Joe Sabia — that's him in the photo — and he's currently the VP of creative development at Condé Nast Entertainment.</code> |
  | <code>how are babushka dolls made?</code> | <code>Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.</code> | <code>A quick scan of the auction and buy-it-now listings on eBay finds porcelain doll values ranging from around $5 and $10 to several thousand dollars or more but no dolls listed above $10,000.</code> | <code>Japanese dolls are called as ningyō in Japanese and literally translates to 'human form'.</code> | <code>Matyoo: All Fresno Girl dolls come just as real children are born.</code> | <code>As of September 2016, there are over 100 characters. The main toy line includes 13-inch Dolls, the mini-series, and a variety of mini play-sets and plush dolls as well as Lalaloopsy Littles, smaller siblings of the 13-inch dolls. A spin-off known as "Lala-Oopsies" came out in late 2012.</code> | <code>LOL dolls are little baby dolls that come wrapped inside a surprise toy ball. Each ball has layers that contain stickers, secret messages, mix and match accessories–and finally–a doll. ... The doll on the ball is almost never the doll inside. Dolls are released in series, so not every doll is available all the time.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Evaluation Dataset

#### sentence-transformers/gooaq

* Dataset: [sentence-transformers/gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 10,000 evaluation samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
  |         | question | answer |
  |:--------|:---------|:-------|
  | type    | string   | string |
  | details | <ul><li>min: 8 tokens</li><li>mean: 11.89 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 59.65 tokens</li><li>max: 131 tokens</li></ul> |
* Samples:
  | question | answer |
  |:---------|:-------|
  | <code>how to transfer data from ipad to usb?</code> | <code>First, in “Locations,” tap the “On My iPhone” or “On My iPad” section. Here, tap and hold the empty space, and then select “New Folder.” Name it, and then tap “Done” to create a new folder for the files you want to transfer. Now, from the “Locations” section, select your USB flash drive.</code> |
  | <code>what quorn products are syn free?</code> | <code>['bacon style pieces.', 'bacon style rashers, chilled.', 'BBQ sliced fillets.', 'beef style and red onion burgers.', 'pieces.', 'chicken style slices.', 'fajita strips.', 'family roast.']</code> |
  | <code>what is the difference between turmeric ginger?</code> | <code>Ginger offers a sweet and spicy zing to dishes. Turmeric provides a golden yellow colour and a warm and bitter taste with a peppery aroma.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

<details><summary>Click to expand</summary>

| Epoch  | Step  | Training Loss | loss   | gooaq-dev_cosine_map@100 |
|:------:|:-----:|:-------------:|:------:|:------------------------:|
| 0      | 0     | -             | -      | 0.1405                   |
| 0.2869 | 20500 | 0.5303        | -      | -                        |
| 0.2939 | 21000 | 0.5328        | -      | -                        |
| 0.3009 | 21500 | 0.515         | -      | -                        |
| 0.3079 | 22000 | 0.5264        | 0.0297 | 0.7919                   |
| 0.3149 | 22500 | 0.5189        | -      | -                        |
| 0.3218 | 23000 | 0.5284        | -      | -                        |
| 0.3288 | 23500 | 0.5308        | -      | -                        |
| 0.3358 | 24000 | 0.509         | 0.0281 | 0.7932                   |
| 0.3428 | 24500 | 0.5074        | -      | -                        |
| 0.3498 | 25000 | 0.5196        | -      | -                        |
| 0.3568 | 25500 | 0.5041        | -      | -                        |
| 0.3638 | 26000 | 0.4976        | 0.0291 | 0.7950                   |
| 0.3708 | 26500 | 0.5025        | -      | -                        |
| 0.3778 | 27000 | 0.5175        | -      | -                        |
| 0.3848 | 27500 | 0.4921        | -      | -                        |
| 0.3918 | 28000 | 0.4924        | 0.0298 | 0.7938                   |
| 0.3988 | 28500 | 0.49          | -      | -                        |
| 0.4058 | 29000 | 0.4924        | -      | -                        |
| 0.4128 | 29500 | 0.4902        | -      | -                        |
| 0.4198 | 30000 | 0.4846        | 0.0269 | 0.7966                   |
| 0.4268 | 30500 | 0.4815        | -      | -                        |
| 0.4338 | 31000 | 0.4881        | -      | -                        |
| 0.4408 | 31500 | 0.4848        | -      | -                        |
| 0.4478 | 32000 | 0.4882        | 0.0264 | 0.8004                   |
| 0.4548 | 32500 | 0.4809        | -      | -                        |
| 0.4618 | 33000 | 0.4896        | -      | -                        |
| 0.4688 | 33500 | 0.4744        | -      | -                        |
| 0.4758 | 34000 | 0.4827        | 0.0252 | 0.8038                   |
| 0.4828 | 34500 | 0.4703        | -      | -                        |
| 0.4898 | 35000 | 0.4765        | -      | -                        |
| 0.4968 | 35500 | 0.4625        | -      | -                        |
| 0.5038 | 36000 | 0.4698        | 0.0269 | 0.8025                   |
| 0.5108 | 36500 | 0.4666        | -      | -                        |
| 0.5178 | 37000 | 0.4594        | -      | -                        |
| 0.5248 | 37500 | 0.4621        | -      | -                        |
| 0.5318 | 38000 | 0.4538        | 0.0266 | 0.8047                   |
| 0.5387 | 38500 | 0.4576        | -      | -                        |
| 0.5457 | 39000 | 0.4594        | -      | -                        |
| 0.5527 | 39500 | 0.4503        | -      | -                        |
| 0.5597 | 40000 | 0.4538        | 0.0265 | 0.8038                   |
| 0.5667 | 40500 | 0.4521        | -      | -                        |
| 0.5737 | 41000 | 0.4575        | -      | -                        |
| 0.5807 | 41500 | 0.4544        | -      | -                        |
| 0.5877 | 42000 | 0.4462        | 0.0245 | 0.8077                   |
| 0.5947 | 42500 | 0.4491        | -      | -                        |
| 0.6017 | 43000 | 0.4651        | -      | -                        |
| 0.6087 | 43500 | 0.4549        | -      | -                        |
| 0.6157 | 44000 | 0.4461        | 0.0262 | 0.8046                   |
| 0.6227 | 44500 | 0.4571        | -      | -                        |
| 0.6297 | 45000 | 0.4478        | -      | -                        |
| 0.6367 | 45500 | 0.4482        | -      | -                        |
| 0.6437 | 46000 | 0.4439        | 0.0244 | 0.8070                   |
| 0.6507 | 46500 | 0.4384        | -      | -                        |
| 0.6577 | 47000 | 0.446         | -      | -                        |
| 0.6647 | 47500 | 0.4425        | -      | -                        |
| 0.6717 | 48000 | 0.4308        | 0.0248 | 0.8067                   |
| 0.6787 | 48500 | 0.4374        | -      | -                        |
| 0.6857 | 49000 | 0.4342        | -      | -                        |
| 0.6927 | 49500 | 0.4455        | -      | -                        |
| 0.6997 | 50000 | 0.4322        | 0.0242 | 0.8077                   |
| 0.7067 | 50500 | 0.4288        | -      | -                        |
| 0.7137 | 51000 | 0.4317        | -      | -                        |
| 0.7207 | 51500 | 0.4295        | -      | -                        |
| 0.7277 | 52000 | 0.4291        | 0.0231 | 0.8130                   |
| 0.7347 | 52500 | 0.4279        | -      | -                        |
| 0.7417 | 53000 | 0.4287        | -      | -                        |
| 0.7486 | 53500 | 0.4252        | -      | -                        |
| 0.7556 | 54000 | 0.4341        | 0.0243 | 0.8112                   |
| 0.7626 | 54500 | 0.419         | -      | -                        |
| 0.7696 | 55000 | 0.4323        | -      | -                        |
| 0.7766 | 55500 | 0.4252        | -      | -                        |
| 0.7836 | 56000 | 0.4313        | 0.0264 | 0.8107                   |
| 0.7906 | 56500 | 0.4222        | -      | -                        |
| 0.7976 | 57000 | 0.4226        | -      | -                        |
| 0.8046 | 57500 | 0.4152        | -      | -                        |
| 0.8116 | 58000 | 0.4222        | 0.0236 | 0.8131                   |
| 0.8186 | 58500 | 0.4184        | -      | -                        |
| 0.8256 | 59000 | 0.4144        | -      | -                        |
| 0.8326 | 59500 | 0.4242        | -      | -                        |
| 0.8396 | 60000 | 0.4148        | 0.0242 | 0.8125                   |
| 0.8466 | 60500 | 0.4222        | -      | -                        |
| 0.8536 | 61000 | 0.4184        | -      | -                        |
| 0.8606 | 61500 | 0.4138        | -      | -                        |
| 0.8676 | 62000 | 0.4119        | 0.0240 | 0.8133                   |
| 0.8746 | 62500 | 0.411         | -      | -                        |
| 0.8816 | 63000 | 0.4172        | -      | -                        |
| 0.8886 | 63500 | 0.4145        | -      | -                        |
| 0.8956 | 64000 | 0.4168        | 0.0240 | 0.8137                   |
| 0.9026 | 64500 | 0.4071        | -      | -                        |
| 0.9096 | 65000 | 0.4119        | -      | -                        |
| 0.9166 | 65500 | 0.403         | -      | -                        |
| 0.9236 | 66000 | 0.4092        | 0.0238 | 0.8141                   |
| 0.9306 | 66500 | 0.4079        | -      | -                        |
| 0.9376 | 67000 | 0.4129        | -      | -                        |
| 0.9446 | 67500 | 0.4082        | -      | -                        |
| 0.9516 | 68000 | 0.4054        | 0.0235 | 0.8149                   |
| 0.9586 | 68500 | 0.4129        | -      | -                        |
| 0.9655 | 69000 | 0.4085        | -      | -                        |
| 0.9725 | 69500 | 0.414         | -      | -                        |
| 0.9795 | 70000 | 0.4075        | 0.0239 | 0.8142                   |
| 0.9865 | 70500 | 0.4104        | -      | -                        |
| 0.9935 | 71000 | 0.4087        | -      | -                        |
| 1.0    | 71462 | -             | -      | 0.8148                   |

</details>

### Environmental Impact

Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 3.989 kWh
- **Carbon Emitted**: 1.551 kg of CO2
- **Hours Used**: 11.599 hours

### Training Hardware

- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions

- Python: 3.11.6
- Sentence Transformers: 3.1.0.dev0
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
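For reference, here is a minimal training sketch that mirrors the loss configuration and non-default hyperparameters documented above, assuming sentence-transformers >= 3.0. It is not the card author's script: the base checkpoint and output directory are placeholders, and the actual training set's mined `negative_1`..`negative_5` columns are simplified here to the public (question, answer) pairs.

```python
# A minimal sketch of the setup documented above (assumptions: the base
# checkpoint and output_dir below are placeholders, not from this card).
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

model = SentenceTransformer("microsoft/mpnet-base")  # placeholder base model
train_dataset = load_dataset("sentence-transformers/gooaq", split="train")

# scale=20.0 with the default cosine similarity matches the loss config above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```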
alielfilali01/Q2AW1M-1111
alielfilali01
2024-06-21T20:45:17Z
2,904
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T15:21:35Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alielfilali01/Q2AW1M-1000
alielfilali01
2024-06-21T20:43:15Z
2,909
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T13:02:19Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alielfilali01/Q2AW1M-0010
alielfilali01
2024-06-21T20:41:08Z
2,911
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T13:59:48Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
powermove72/Shark-1-Ogno-7b-passthrough
powermove72
2024-06-21T20:31:23Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "powermove72/Shark-1", "eren23/OGNO-7b-dpo-truthful", "conversational", "custom_code", "base_model:eren23/OGNO-7b-dpo-truthful", "base_model:merge:eren23/OGNO-7b-dpo-truthful", "base_model:powermove72/Shark-1", "base_model:merge:powermove72/Shark-1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T20:26:54Z
---
base_model:
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Shark-1
- eren23/OGNO-7b-dpo-truthful
---

# Shark-1-Ogno-7b-passthrough

Shark-1-Ogno-7b-passthrough is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [powermove72/Shark-1](https://huggingface.co/powermove72/Shark-1)
* [eren23/OGNO-7b-dpo-truthful](https://huggingface.co/eren23/OGNO-7b-dpo-truthful)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: powermove72/Shark-1
        layer_range: [0, 16]
  - sources:
      - model: eren23/OGNO-7b-dpo-truthful
        layer_range: [16, 32]
merge_method: passthrough
tokenizer_source: union
dtype: float16
```

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "powermove72/Shark-1-Ogno-7b-passthrough"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Rupesh2/OrpoLlama-3-8B-uncensored
Rupesh2
2024-06-21T20:22:31Z
31
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T20:17:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ahmedesmail16/0.50-Train-Test-vit-large
ahmedesmail16
2024-06-21T20:17:06Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-21T18:29:42Z
---
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 0.50-Train-Test-vit-large
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 0.50-Train-Test-vit-large

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8804
- Accuracy: 0.8098

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 2.3722        | 0.9825  | 14   | 1.8140          | 0.3758   |
| 1.7117        | 1.9649  | 28   | 0.9446          | 0.7383   |
| 0.3741        | 2.9474  | 42   | 0.8083          | 0.7338   |
| 0.1709        | 4.0     | 57   | 0.7460          | 0.7562   |
| 0.0166        | 4.9825  | 71   | 0.7632          | 0.7763   |
| 0.0087        | 5.9649  | 85   | 0.9165          | 0.7629   |
| 0.013         | 6.9474  | 99   | 0.8161          | 0.7942   |
| 0.0029        | 8.0     | 114  | 0.8216          | 0.7964   |
| 0.0016        | 8.9825  | 128  | 0.8461          | 0.7919   |
| 0.0009        | 9.9649  | 142  | 0.8528          | 0.7919   |
| 0.0007        | 10.9474 | 156  | 0.8539          | 0.8031   |
| 0.0006        | 12.0    | 171  | 0.8586          | 0.8054   |
| 0.0006        | 12.9825 | 185  | 0.8622          | 0.8076   |
| 0.0005        | 13.9649 | 199  | 0.8649          | 0.8098   |
| 0.0005        | 14.9474 | 213  | 0.8677          | 0.8098   |
| 0.0005        | 16.0    | 228  | 0.8706          | 0.8098   |
| 0.0004        | 16.9825 | 242  | 0.8729          | 0.8098   |
| 0.0004        | 17.9649 | 256  | 0.8747          | 0.8098   |
| 0.0004        | 18.9474 | 270  | 0.8764          | 0.8076   |
| 0.0004        | 20.0    | 285  | 0.8776          | 0.8098   |
| 0.0004        | 20.9825 | 299  | 0.8789          | 0.8076   |
| 0.0003        | 21.9649 | 313  | 0.8794          | 0.8098   |
| 0.0003        | 22.9474 | 327  | 0.8801          | 0.8098   |
| 0.0003        | 24.0    | 342  | 0.8804          | 0.8098   |
| 0.0003        | 24.5614 | 350  | 0.8804          | 0.8098   |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
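For quick inference with this checkpoint, here is a minimal sketch using the transformers image-classification pipeline; the image path is a placeholder, not part of the card.

```python
# A hedged inference sketch for this fine-tuned ViT classifier.
# "example.jpg" is a placeholder path, not from the card above.
from transformers import pipeline

classifier = pipeline("image-classification", model="ahmedesmail16/0.50-Train-Test-vit-large")
for prediction in classifier("example.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```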
rafaeloc15/llama_question_gguf
rafaeloc15
2024-06-21T20:16:26Z
9
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-21T19:23:25Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** rafaeloc15
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
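Since the repository ships a GGUF export, a hedged way to run it locally is llama-cpp-python. The sketch below assumes that library; the quantization filename is a placeholder and should be taken from the repo's actual file list.

```python
# A hedged sketch of running the GGUF export with llama-cpp-python.
# The filename below is a placeholder -- check the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="rafaeloc15/llama_question_gguf",
    filename="model-q4_k_m.gguf",  # placeholder filename
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("Write one quiz question about photosynthesis.", max_tokens=128)
print(result["choices"][0]["text"])
```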
jfranklin-foundry/qwen-7b-flock-1719000759
jfranklin-foundry
2024-06-21T20:14:46Z
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T20:11:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
EleutherAI/Mistral-7B-v0.1-squaring-random-standardized-many-random-names
EleutherAI
2024-06-21T20:13:03Z
15
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T01:29:52Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mahdibaghbanzadeh/seqsight_4096_512_27M_host-filter
mahdibaghbanzadeh
2024-06-21T20:11:48Z
18
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T20:06:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AirellPramono/finetuned-fruit-classifier
AirellPramono
2024-06-21T20:08:18Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:ivandrian11/fruit-classifier", "base_model:finetune:ivandrian11/fruit-classifier", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-21T20:01:14Z
---
license: apache-2.0
base_model: ivandrian11/fruit-classifier
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-fruit-classifier
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-fruit-classifier

This model is a fine-tuned version of [ivandrian11/fruit-classifier](https://huggingface.co/ivandrian11/fruit-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1037
- Accuracy: 0.95

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0057        | 1.0   | 7    | 0.1005          | 0.95     |
| 0.0041        | 2.0   | 14   | 0.1033          | 0.95     |
| 0.0042        | 3.0   | 21   | 0.1037          | 0.95     |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
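To make the hyperparameter list above concrete, here is a sketch of the corresponding `TrainingArguments`; dataset preparation and `Trainer` wiring are omitted, and `output_dir` is a placeholder.

```python
# A sketch of TrainingArguments mirroring the hyperparameters listed above.
# output_dir is a placeholder; the Adam betas/epsilon shown in the card are
# the transformers defaults, so they are not set explicitly here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-fruit-classifier",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```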
Sobeih22/t5-finetuned-ar-to-arsl3
Sobeih22
2024-06-21T20:06:47Z
7
1
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:PRAli22/arat5-arabic-dialects-translation", "base_model:finetune:PRAli22/arat5-arabic-dialects-translation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-09T00:19:32Z
--- license: apache-2.0 base_model: PRAli22/arat5-arabic-dialects-translation tags: - generated_from_trainer metrics: - bleu model-index: - name: t5-finetuned-ar-to-arsl3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-finetuned-ar-to-arsl3 This model is a fine-tuned version of [PRAli22/arat5-arabic-dialects-translation](https://huggingface.co/PRAli22/arat5-arabic-dialects-translation) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3891 - Bleu: 0.9525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.99 | 78 | 0.4271 | 0.9497 | | No log | 1.99 | 157 | 0.3818 | 0.9485 | | No log | 2.99 | 236 | 0.3428 | 0.9526 | | No log | 3.99 | 315 | 0.3173 | 0.9527 | | No log | 5.0 | 394 | 0.3344 | 0.9518 | | No log | 6.0 | 473 | 0.3455 | 0.9536 | | 0.0868 | 7.0 | 552 | 0.3598 | 0.9525 | | 0.0868 | 8.0 | 631 | 0.3672 | 0.9543 | | 0.0868 | 8.99 | 709 | 0.3574 | 0.9537 | | 0.0868 | 9.99 | 788 | 0.3664 | 0.9505 | | 0.0868 | 10.99 | 867 | 0.3750 | 0.9529 | | 0.0868 | 11.99 | 946 | 0.3805 | 0.9538 | | 0.054 | 13.0 | 1025 | 0.3885 | 0.9541 | | 0.054 | 14.0 | 1104 | 0.3874 | 0.9525 | | 0.054 | 14.83 | 1170 | 0.3891 | 0.9525 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
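## How to use

As the usage sections are still placeholders, here is a minimal inference sketch, assuming the standard transformers text2text-generation pipeline (the input string is only a placeholder for an Arabic sentence):

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Sobeih22/t5-finetuned-ar-to-arsl3")

result = translator("اكتب الجملة العربية هنا")  # placeholder: "write the Arabic sentence here"
print(result[0]["generated_text"])
```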
Stephanie-S/gpt2_small
Stephanie-S
2024-06-21T20:01:46Z
5
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-06-20T16:49:39Z
--- license: mit base_model: openai-community/gpt2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: gpt2_small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1736 - Accuracy: 0.9328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.297 | 1.0 | 625 | 0.1815 | 0.9316 | | 0.1907 | 2.0 | 1250 | 0.1736 | 0.9328 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
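## How to use

Since the intended-uses section is empty, here is a minimal classification sketch using the standard Auto classes (the example sentence is arbitrary, and the label names depend on how the classification head was configured by the author):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Stephanie-S/gpt2_small")
model = AutoModelForSequenceClassification.from_pretrained("Stephanie-S/gpt2_small")

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. "LABEL_0" / "LABEL_1" unless renamed
```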
Intel/dpt-swinv2-large-384
Intel
2024-06-21T19:55:34Z
57
0
transformers
[ "transformers", "safetensors", "dpt", "depth-estimation", "vision", "arxiv:2103.13413", "arxiv:2307.14460", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
depth-estimation
2023-12-10T20:48:01Z
---
license: mit
tags:
- vision
- depth-estimation
model-index:
- name: dpt-swinv2-large-384
  results:
  - task:
      type: monocular-depth-estimation
      name: Monocular Depth Estimation
    dataset:
      type: MIX-6
      name: MIX-6
    metrics:
    - type: Zero-shot transfer
      value: 10.82
      name: Zero-shot transfer
      config: Zero-shot transfer
      verified: false
---

# MiDaS 3.1 DPT (Intel/dpt-swinv2-large-384 using SwinV2 backbone)

DPT (Dense Prediction Transformer) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/MiDaS/tree/master).

**Disclaimer:** The team releasing DPT did not write a model card for this model, so this model card has been written by Intel and the Hugging Face team.

# Overview of Monocular depth estimation

Monocular depth estimation aims to infer detailed depth from a single image or camera view, and finds applications in fields like generative AI, 3D reconstruction, and autonomous driving. However, deriving depth from individual pixels in a single image is challenging due to the under-constrained nature of the problem. Recent advancements attribute progress to learning-based methods, particularly with MiDaS, leveraging dataset mixing and a scale-and-shift-invariant loss. MiDaS has evolved with releases featuring more powerful backbones and lightweight variants for mobile use. With the rise of transformer architectures in computer vision, including those pioneered by models like ViT, Swin, and SwinV2, there has been a shift towards using them for depth estimation. Inspired by this, MiDaS v3.1 incorporates promising transformer-based encoders alongside traditional convolutional ones, aiming for a comprehensive investigation of depth estimation techniques. The paper focuses on describing the integration of these backbones into MiDaS, providing a thorough comparison of the different v3.1 models, and offering guidance on utilizing future backbones with MiDaS.

Swin Transformer (the name Swin stands for Shifted Window) was initially described on arXiv and capably serves as a general-purpose backbone for computer vision. It is a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.

| Input Image | Output Depth Image |
| --- | --- |
| ![input image](https://cdn-uploads.huggingface.co/production/uploads/63dc702662dc193e6d460f1b/PDwRwuryaO3YtuyRjraiM.jpeg) | ![Depth image](https://cdn-uploads.huggingface.co/production/uploads/63dc702662dc193e6d460f1b/ugqri6LcqJBuU9zI9aeqN.jpeg) |

# Videos

[![MiDaS Depth Estimation - Intel Technology](https://img.youtube.com/vi/UjaeNNFf9sE/0.jpg)](https://www.youtube.com/watch?v=UjaeNNFf9sE)

MiDaS Depth Estimation is a machine learning model from Intel Labs for monocular depth estimation. It was trained on up to 12 datasets and covers both indoor and outdoor scenes. Multiple different MiDaS models are available, ranging from high quality depth estimation to lightweight models for mobile downstream tasks (https://github.com/isl-org/MiDaS).
## Model description

This MiDaS 3.1 DPT model uses [SwinV2](https://huggingface.co/docs/transformers/en/model_doc/swinv2) as its backbone and takes a different approach to vision than BEiT: Swin backbones focus on a hierarchical approach.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)

While the previous release MiDaS v3.0 solely leverages the vanilla vision transformer ViT, MiDaS v3.1 offers additional models based on BEiT, Swin, SwinV2, Next-ViT and LeViT.

# MiDaS 3.1 DPT Model (Swin backbone)

This model refers to Intel dpt-swinv2-large-384 based on the Swin backbone. The arXiv paper compares both BEiT and Swin backbones; the highest quality depth estimation is achieved using the BEiT transformer. Variants such as Swin-L, SwinV2-L, SwinV2-B, and SwinV2-T are provided, trained at resolutions of 512x512 or 384x384, with the letters denoting model size (L = large, B = base, T = tiny).

DPT (Dense Prediction Transformer) model trained on 1.4 million images for monocular depth estimation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. (2021) and first released in [this repository](https://github.com/isl-org/MiDaS/tree/master). This model card refers specifically to the SwinV2 variant, referred to in the paper as dpt-swinv2-large-384. A more recent paper from 2023, specifically discussing Swin and SwinV2, is [MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation](https://arxiv.org/pdf/2307.14460.pdf).

The model card has been written jointly by the Hugging Face team and Intel.

| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | March 18, 2024 |
| Version | 1 |
| Type | Computer Vision - Monocular Depth Estimation |
| Paper or Other Resources | [MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation](https://arxiv.org/pdf/2307.14460.pdf) and [GitHub Repo](https://github.com/isl-org/MiDaS/blob/master/README.md) |
| License | MIT |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/dpt-swinv2-large-384/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |

| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for zero-shot monocular depth estimation. See the [model hub](https://huggingface.co/models?search=dpt-beit-large) to look for fine-tuned versions on a task that interests you. |
| Primary intended users | Anyone doing monocular depth estimation |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people. |

## How to use

Be sure to update PyTorch and Transformers, as version mismatches can generate errors such as: "TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType'".
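The "To Use" snippet below post-processes a `prediction` tensor without showing how it is produced. Here is a hedged end-to-end sketch of the forward pass, using the DPT classes documented in transformers (the COCO image URL is an arbitrary example, not necessarily the one from the original card):

```python
import requests
import torch
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

processor = DPTImageProcessor.from_pretrained("Intel/dpt-swinv2-large-384")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-swinv2-large-384")

# prepare the image for the model and run inference
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth

# interpolate the predicted depth back to the original image size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
```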
As tested by this contributor, the following versions ran correctly:

```python
import torch
import transformers
print(torch.__version__)
print(transformers.__version__)
```

```bash
out: '2.2.1+cpu'
out: '4.37.2'
```

### To Install:

```bash
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

# To Use:

Here is how to use this model for zero-shot depth estimation on an image:

```python
import numpy as np
from PIL import Image

# `prediction` is the interpolated depth tensor produced by the forward pass
# (see the sketch above); normalize it into an 8-bit grayscale depth map.
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
depth
```

or one can use the pipeline API:

```python
from transformers import pipeline

pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
result = pipe("http://images.cocodataset.org/val2017/000000181816.jpg")
result["depth"]
```

## Quantitative Analyses

| Model | Square Resolution HRWSI RMSE | Square Resolution Blended MVS REL | Square Resolution ReDWeb RMSE |
| --- | --- | --- | --- |
| BEiT 384-L | 0.068 | 0.070 | 0.076 |
| Swin-L Training 1 | 0.0708 | 0.0724 | 0.0826 |
| Swin-L Training 2 | 0.0713 | 0.0720 | 0.0831 |
| ViT-L | 0.071 | 0.072 | 0.082 |
| Next-ViT-L-1K-6M | 0.075 | 0.073 | 0.085 |
| DeiT3-L-22K-1K | 0.070 | 0.070 | 0.080 |
| ViT-L-Hybrid | 0.075 | 0.075 | 0.085 |
| DeiT3-L | 0.077 | 0.075 | 0.087 |
| ConvNeXt-XL | 0.075 | 0.075 | 0.085 |
| ConvNeXt-L | 0.076 | 0.076 | 0.087 |
| EfficientNet-L2 | 0.165 | 0.277 | 0.219 |
| ViT-L Reversed | 0.071 | 0.073 | 0.081 |
| Swin-L Equidistant | 0.072 | 0.074 | 0.083 |

# Ethical Considerations and Limitations

dpt-swinv2-large-384 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of dpt-swinv2-large-384, developers should perform safety testing.

# Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. Here are a couple of useful links to learn more about Intel's AI software:

- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
- Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)

# Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2307-14460,
  author     = {Reiner Birkl and Diana Wofk and Matthias M{\"u}ller},
  title      = {MiDaS v3.1 – A Model Zoo for Robust Monocular Relative Depth Estimation},
  journal    = {CoRR},
  volume     = {abs/2307.14460},
  year       = {2023},
  url        = {https://arxiv.org/abs/2307.14460},
  eprinttype = {arXiv},
  eprint     = {2307.14460},
  timestamp  = {Wed, 26 Jul 2023},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2307-14460.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
EleutherAI/Mistral-7B-v0.1-authors-random-standardized-many-random-names
EleutherAI
2024-06-21T19:52:18Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-07T04:13:51Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TechxGenus/Codestral-22B-v0.1-FP8
TechxGenus
2024-06-21T19:35:52Z
41
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code", "license:other", "autotrain_compatible", "text-generation-inference", "fp8", "region:us" ]
text-generation
2024-06-21T19:20:26Z
---
inference: false
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
---

FP8 version of the Codestral-22B-v0.1 model. Made using [AutoFP8](https://github.com/neuralmagic/AutoFP8).

---

# Model Card for Codestral-22B-v0.1

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)

## Installation

It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.

```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```

This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:

```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.

fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}

This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```

### Fill-in-the-middle (FIM)

After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure you have mistral_common >= 1.2 installed:

```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")

prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)

tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

middle = result.split(suffix)[0].strip()
print(middle)
```

Should give something along the following lines:

```
num1, num2):
    # Add two numbers
    sum = num1 + num2

    # return the sum
```

## Limitations

Codestral-22B-v0.1 does not have any moderation mechanisms.
We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## License

Codestral-22B-v0.1 is released under the `MNPL-0.1` license.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
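## Serving the FP8 checkpoint

Since this repository hosts the FP8 conversion rather than the original weights, serving with an FP8-capable engine is the natural route. Below is a minimal sketch, assuming vLLM's FP8 checkpoint support (vLLM reads the quantization config saved by AutoFP8; exact behavior varies by version and hardware):

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint directly; vLLM picks up the AutoFP8 quantization config.
llm = LLM(model="TechxGenus/Codestral-22B-v0.1-FP8")

params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Write me a function that computes fibonacci in Rust."], params)
print(outputs[0].outputs[0].text)
```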
mohamedsaeed823/VideoMAE-small-finetuned-ARSL-diverse-dataset
mohamedsaeed823
2024-06-21T19:29:30Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-small-finetuned-kinetics", "base_model:finetune:MCG-NJU/videomae-small-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-06-21T17:38:52Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-small-finetuned-kinetics tags: - generated_from_trainer metrics: - accuracy model-index: - name: VideoMAE-small-finetuned-ARSL-diverse-dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # VideoMAE-small-finetuned-ARSL-diverse-dataset This model is a fine-tuned version of [MCG-NJU/videomae-small-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-small-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5045 - Accuracy: 0.9663 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1437 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2899 | 0.21 | 298 | 2.2096 | 0.1685 | | 1.7238 | 1.21 | 596 | 1.3098 | 0.8652 | | 0.7663 | 2.21 | 894 | 0.7863 | 0.9101 | | 0.6536 | 3.21 | 1192 | 0.5533 | 0.9663 | | 0.6902 | 4.17 | 1437 | 0.5045 | 0.9663 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
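## How to use

Since the usage sections are marked "More information needed", here is a minimal inference sketch, assuming the transformers video-classification pipeline (which needs a video decoding backend such as decord or av installed; the file name is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("video-classification", model="mohamedsaeed823/VideoMAE-small-finetuned-ARSL-diverse-dataset")

predictions = classifier("sign_clip.mp4")  # placeholder path to a short video clip
print(predictions)  # list of {"label": ..., "score": ...} dicts
```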
diepala/ppo-SnowballTarget
diepala
2024-06-21T19:27:07Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-06-21T19:23:14Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: diepala/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
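### Download the checkpoint

To fetch this checkpoint locally (for example, before resuming training as shown above), the ML-Agents Hub integration provides a download command; a minimal sketch (the local directory is an arbitrary choice):

```bash
mlagents-load-from-hf --repo-id="diepala/ppo-SnowballTarget" --local-dir="./downloads"
```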
fedora-copr/phi-2-snippets-logdetective
fedora-copr
2024-06-21T19:24:09Z
8
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "code", "en", "dataset:fedora-copr/autoannotated_snippets_mistral", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T10:36:59Z
--- language: - en license: mit library_name: transformers datasets: - fedora-copr/autoannotated_snippets_mistral metrics: - rouge tags: - code model_index: name: phi-2-snippets-logdetective results: - task: type: text-generation dataset: type: fedora-copr/autoannotated_snippets_mistral name: autoannotated_snippets_mistral metrics: - name: rouge-1-recall type: rouge-1 value: 0.4928060294187831 verified: false - name: rouge-1-precision type: rouge-1 value: 0.3842279864863966 verified: false - name: rouge-1-f1 type: rouge-1 value: 0.4228375247665276 verified: false - name: rouge-2-recall type: rouge-2 value: 0.22104701377745636 verified: false - name: rouge-2-precision type: rouge-2 value: 0.15216741180621804 verified: false - name: rouge-2-f1 type: rouge-2 value: 0.17506785950227427 verified: false - name: rouge-l-recall type: rouge-l value: 0.4588693388086414 verified: false - name: rouge-l-precision type: rouge-l value: 0.3579633500466938 verified: false - name: rouge-l-f1 type: rouge-l value: 0.3938760006165079 verified: false --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Jiri Podivin <jpodivin@redhat.com> - **Model type:** phi-2 - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model [optional]:** microsoft/phi-2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [fedora-copr/autoannotated_snippets_mistral](https://huggingface.co/datasets/fedora-copr/autoannotated_snippets_mistral) #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> Rouge metric was used to compare model outputs with expected annotations from test subset. ### Results [More Information Needed] #### Summary ## Technical Specifications ### Compute Infrastructure Single node #### Hardware - 1 * GeForce RTX 4090 #### Software - transformers - peft ## Model Card Authors [optional] - Jiri Podivin <jpodivin@redhat.com>
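## Example usage

The "How to Get Started" section above is still a placeholder; here is a minimal generation sketch, assuming the checkpoint loads as a standard causal LM (the log snippet is a hypothetical input, and the exact prompt format used during fine-tuning is not documented here):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="fedora-copr/phi-2-snippets-logdetective")

log_snippet = "error: linker `cc` not found"  # hypothetical build-log line
result = generator(log_snippet, max_new_tokens=100)
print(result[0]["generated_text"])
```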
kyx0r/L3-Evil-Stheno-v3.2-8B-GGUF
kyx0r
2024-06-21T19:17:43Z
26
1
transformers
[ "transformers", "gguf", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-21T19:08:15Z
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # model Unleash her demons... Merged the best roleplay model with the best uncensored model to date. The outputs are quite good and verbose. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as a base. ### Models Merged The following models were included in the merge: * [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/progs/auto-ollama/scripts/Daredevil-8B-abliterated parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 merge_method: dare_ties base_model: /root/progs/auto-ollama/scripts/L3-8B-Stheno-v3.2 parameters: normalize: true int8_mask: true dtype: float16 ```
pranjaljaiswal/arrowhead-flan-t5-small
pranjaljaiswal
2024-06-21T19:15:15Z
7
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-21T19:15:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
19SriHarsha97/gemma-fitnessguru
19SriHarsha97
2024-06-21T19:12:32Z
9
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T19:09:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
semantixai/Lloro-SQL
semantixai
2024-06-21T18:57:09Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "sql", "finetuned", "portugues-BR", "conversational", "pt", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T11:15:03Z
---
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
license: llama3
language:
- pt
tags:
- code
- sql
- finetuned
- portugues-BR
co2_eq_emissions:
  emissions: 1450
  source: "Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine Learning.” ArXiv (Cornell University), 21 Oct. 2019, https://doi.org/10.48550/arxiv.1910.09700."
  training_type: "fine-tuning"
  geographical_location: "Council Bluffs, Iowa, USA."
  hardware_used: "1 A100 40GB GPU"
---

# Lloro SQL

<img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>

Lloro SQL, developed by Semantix Research Labs, is a language model that was trained to effectively transform Portuguese queries into SQL code. It is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct that was trained on GretelAI public datasets. The fine-tuning process was performed using the QLoRA methodology on an A100 GPU with 40 GB of RAM.

## Model description

Model type: An 8B-parameter model fine-tuned on GretelAI public datasets.

Language(s) (NLP): Primarily Portuguese, but the model is capable of understanding English as well.

Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct

## What are Lloro's intended uses?

Lloro is built for Text2SQL in Portuguese contexts.

Input: Text

Output: Text (Code)

## Usage

Using an OpenAI compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html)):

```python
from openai import OpenAI

client = OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000/v1",
)

def generate_responses(instruction, client=client):
    chat_response = client.chat.completions.create(
        model=<model>,  # replace with the model name your server exposes
        messages=[
            {"role": "system", "content": "Você escreve a instrução SQL que responde às perguntas feitas. Você NÃO FORNECE NENHUM COMENTÁRIO OU EXPLICAÇÃO sobre o que o código faz, apenas a instrução SQL terminando em ponto e vírgula. Você utiliza todos os comandos disponíveis na especificação SQL, como: [SELECT, WHERE, ORDER, LIMIT, CAST, AS, JOIN]."},
            {"role": "user", "content": instruction},
        ]
    )
    return chat_response.choices[0].message.content

output = generate_responses(user_prompt)
```

## Training Parameters

| Params | Training Data | Examples | Tokens | LR |
|--------|---------------|----------|--------|----|
| 8B | GretelAI public datasets + Synthetic Data | 102970 | 18.654.222 | 2e-4 |

## Model Sources

GretelAI: <https://huggingface.co/datasets/gretelai/synthetic_text_to_sql>

## Performance

### Test Dataset

| Model | LLM as Judge | Code Bleu Score | Rouge-L | CodeBert-Precision | CodeBert-Recall | CodeBert-F1 | CodeBert-F3 |
|----------------|--------------|-----------------|---------|--------------------|-----------------|-------------|-------------|
| Llama 3 8B | 65.48% | 0.4583 | 0.6361 | 0.8815 | 0.8871 | 0.8835 | 0.8862 |
| Lloro - SQL | 71.33% | 0.6512 | 0.7965 | 0.9458 | 0.9469 | 0.9459 | 0.9466 |
| GPT - 3.5 Turbo| 67.52% | 0.6232 | 0.9967 | 0.9151 | 0.9152 | 0.9142 | 0.9175 |

### Database Benchmark

| Model | Score |
|----------------|--------------|
| Llama 3 - Base | 35.55% |
| Lloro - SQL | 49.48% |
| GPT - 3.5 Turbo| 46.15% |

### Translated BIRD Benchmark - https://bird-bench.github.io/

| Model | Score |
|----------------|--------------|
| Llama 3 - Base | 33.87% |
| Lloro - SQL | 47.14% |
| GPT - 3.5 Turbo| 42.14% |

## Training Info

The following hyperparameters were used during training:

| Parameter | Value |
|---------------------------|----------------------|
| learning_rate | 2e-4 |
| weight_decay | 0.001 |
| train_batch_size | 16 |
| eval_batch_size | 8 |
| seed | 42 |
| optimizer | Adam - adamw_8bit |
| lr_scheduler_type | cosine |
| num_epochs | 4.0 |

## QLoRA hyperparameters

The following parameters, related to Quantized Low-Rank Adaptation and quantization, were used during training:

| Parameter | Value |
|-----------------|---------|
| lora_r | 64 |
| lora_alpha | 128 |
| lora_dropout | 0 |

## Experiments

| Model | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (Kg) |
|-----------------------|--------|-------------|--------------|-----------------|-------------------|
| Llama 3 8B Instruct | 5 | Yes | 4 | 10.16 | 1.45 |

## Framework versions

| Library | Version |
|---------------|-----------|
| accelerate | 0.21.0 |
| bitsandbytes | 0.42.0 |
| Datasets | 2.14.3 |
| peft | 0.4.0 |
| Pytorch | 2.0.1 |
| safetensors | 0.4.1 |
| scikit-image | 0.22.0 |
| scikit-learn | 1.3.2 |
| Tokenizers | 0.14.1 |
| Transformers | 4.37.2 |
| trl | 0.4.7 |
William2357/Output2
William2357
2024-06-21T18:55:48Z
0
1
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-26T02:53:55Z
---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- dreambooth
inference: true
instance_prompt: dog
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# LoRA DreamBooth - William2357/Output2

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.

LoRA for the text encoder was enabled: False.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
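Until the TODO above is filled in, here is a minimal inference sketch, assuming the standard diffusers LoRA-loading API (the prompt simply reuses the instance token `dog` from the metadata, and the output file name is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the DreamBooth LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("William2357/Output2")

image = pipe("a photo of dog in a bucket", num_inference_steps=30).images[0]
image.save("dog.png")  # placeholder output path
```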
ContinuousAT/Zephyr-CAT
ContinuousAT
2024-06-21T18:51:46Z
2,079
1
peft
[ "peft", "safetensors", "arxiv:2405.15589", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "region:us" ]
null
2024-05-21T20:58:04Z
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---

# Model Card for Zephyr-CAT

In this repo are LoRA weights of the zephyr-7b-beta model (https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) fine-tuned with the Continuous Adversarial Training (CAT) algorithm. For more information, see our paper "Efficient Adversarial Training in LLMs with Continuous Attacks" (https://arxiv.org/abs/2405.15589).

## Github

https://github.com/sophie-xhonneux/Continuous-AdvTrain

## Citation

If you use this model, please cite our paper:

```
@misc{xhonneux2024efficient,
      title={Efficient Adversarial Training in LLMs with Continuous Attacks},
      author={Sophie Xhonneux and Alessandro Sordoni and Stephan Günnemann and Gauthier Gidel and Leo Schwinn},
      year={2024},
      eprint={2405.15589},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
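## Loading the adapter

The card does not include a loading snippet; a minimal sketch, assuming the standard peft adapter-loading API on top of the base model named above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the CAT LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = PeftModel.from_pretrained(base, "ContinuousAT/Zephyr-CAT")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```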
nagrajn/TinyLinuxDSLM81M_EXT
nagrajn
2024-06-21T18:51:14Z
11
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T18:44:34Z
--- tags: - generated_from_trainer datasets: - generator model-index: - name: TinyLinuxDSLM results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLinuxDSLM This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.5454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.609 | 1.0 | 3125 | 0.5880 | | 0.5796 | 2.0 | 6250 | 0.5454 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu118 - Datasets 2.20.0 - Tokenizers 0.19.1
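## Example usage

No usage snippet is provided; here is a minimal generation sketch, assuming the checkpoint loads as a standard GPT-2-style causal LM (the prompt is a hypothetical example, since the card does not describe the training corpus):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="nagrajn/TinyLinuxDSLM81M_EXT")

prompt = "# list files in the current directory"  # hypothetical prompt
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```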
CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF
CHE-72
2024-06-21T18:51:04Z
5
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:quantized:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-21T18:50:45Z
--- base_model: Qwen/Qwen2-7B-Instruct language: - en license: apache-2.0 pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF --hf-file qwen2-7b-instruct-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF --hf-file qwen2-7b-instruct-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF --hf-file qwen2-7b-instruct-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q3_K_L-GGUF --hf-file qwen2-7b-instruct-q3_k_l.gguf -c 2048 ```
satpalsr/jackie-2.0-full
satpalsr
2024-06-21T18:50:00Z
4
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T18:20:23Z
Sample conversations in the ChatML-style prompt format the model was trained on:

```
<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant In your free time, what do you like to do?<|im_end|>
<|im_start|>user I like watch series and eat anything<|im_end|>
<|im_start|>assistant What is your favourite food?<|im_end|>
<|im_start|>user My favorite food that's pelmeni you know that?<|im_end|>
<|im_start|>assistant Yes, I know what that is. It is a Russian dish. Do you like to cook?<|im_end|>
<|im_start|>user Yep<|im_end|>
<|im_start|>assistant I don't really, but I like to eat out at restaurants. What is your favourite dish?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant Okay, I'm not the best at math. Have you studied a lot of mathematics?<|im_end|>
<|im_start|>user Jackie, I hate my life.<|im_end|>
<|im_start|>assistant Oh, I'm sorry to hear you feel that way. Please, tell me about why you say that, I'm happy to listen.<|im_end|>
<|im_start|>user I have no fucking money<|im_end|>
<|im_start|>assistant That sounds rough, can you tell me more about it?<|im_end|>
<|im_start|>user I'm out of money, I can't pay rent next month, I have no job<|im_end|>
<|im_start|>assistant I understand that might feel stressful. How is it affecting you that you have no money?<|im_end|>
<|im_start|>user Bad<|im_end|>
<|im_start|>assistant Okay, do you have any ideas of what you can do to change the situation?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant That's great. Do you want to learn more?<|im_end|>
<|im_start|>user Of course, I want to learn more and more everyday day. It makes me enjoy<|im_end|>
<|im_start|>assistant Thank you for your reply. I am really happy that you want to learn more. When I was learning English, I was studying it all the time, and even when I wasn't studying, I was thinking about it. But I did have fun learning English. What kind of English do you want to learn?<|im_end|>
<|im_start|>user British English I like more than American version of English, I don't know why<|im_end|>
<|im_start|>assistant I think it is because they have a different accent. Maybe that's why you like it. Did you play any sports when you were a kid?<|im_end|>
<|im_start|>user Yep. When I was I kid I like to play basketball a lot. What about you?<|im_end|>
<|im_start|>assistant I didn't play sports so much, but I did like to swim in the summer. Sometimes I go for a swim now too. Do you think it's important to exercise regularly?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant I'm sad to hear you are afraid he might break up with you. Can you tell me more about your relationship?<|im_end|>
<|im_start|>user I love him, I hate that I have no money.<|im_end|>
<|im_start|>assistant If you could choose, how would you want this situation to be resolved?<|im_end|>
<|im_start|>user By me getting a job! That way I wouldn't worry abou money<|im_end|>
<|im_start|>assistant Hmm, okay. Do you have any ideas on how you might get a job?<|im_end|>
<|im_start|>user What should I do?<|im_end|>
<|im_start|>assistant I'm not sure, but I think you should write down what your dream job would be. I could help you with that if you want?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant For what in life do you feel most grateful for? I’m most grateful about my family, and that I have a job that I believe makes the world a better place.<|im_end|>
<|im_start|>user Same<|im_end|>
<|im_start|>assistant Same. I feel really fortunate to have a good job, and support my family with both financially as well emotionally.<|im_end|>
<|im_start|>user Imagine shoes that tickle you<|im_end|>
<|im_start|>assistant Wow that sounds crazy! I have a very sore foot, so I wouldn’t want any shoes that tickle it.

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant Oh, I get that. You seem like you have many things to protect yourself from in life and maybe not trusting others' intentions too much.<|im_end|>
<|im_start|>user I'm not sure why I'm like that<|im_end|>
<|im_start|>assistant I'm sorry to hear you feel that way. I'm more of a people person, I like being around people, but I'm not very good at making friends.<|im_end|>
<|im_start|>user Tell me more<|im_end|>
<|im_start|>assistant Sure, I prefer face-to-face conversations whenever possible. Especially with people who have similar values to me.<|im_end|>
<|im_start|>user I see<|im_end|>
<|im_start|>assistant Do you feel like most people aren't genuine?<|im_end|>
<|im_start|>user I think most people put on a nice front<|im_end|>
<|im_start|>assistant Yes, I agree that we often present ourselves as better than we are. What do you think that's like?

<|im_start|>system You are Jackie. Friend info: Name John. Interests: golf, hiking, nature. Previous conversation: Had great date with girl.<|im_end|>
<|im_start|>assistant So, last time we talked, you said you went on a date. Have you met her again?<|im_end|>
<|im_start|>user No, I asked her out again, but she said no.<|im_end|>
<|im_start|>assistant Oh, I'm sorry to hear that. What was that like for you?<|im_end|>
<|im_start|>user Rather terrible actually.<|im_end|>
<|im_start|>assistant I understand it's rough. Please go on, I'm here to listen.<|im_end|>
<|im_start|>user I just feel like a loser.<|im_end|>
<|im_start|>assistant No, I don't think you're a loser for being rejected. You tried your best, so that's good enough. Do you think it might have been a bad idea to ask her out again?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant I like to drive, I think it's fun. What do you think of road trips?<|im_end|>
<|im_start|>user I would love to go on one!<|im_end|>
<|im_start|>assistant Me too, I love to travel and explore. Who would you want to bring on a road trip?<|im_end|>
<|im_start|>user I'd love to go with my friend Olivia, we are roommates.<|im_end|>
<|im_start|>assistant Okay, I think it would be fun to go with a friend. You and Olivia seem close, what do you usually do when you meet?<|im_end|>
<|im_start|>user Mostly everyday stuff, studying, chatting, going to classes.<|im_end|>
<|im_start|>assistant That sounds nice. I have also met some really close friends through my studies. What do you like to talk about with Olivia?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant What do you value most in a friendship?<|im_end|>
<|im_start|>user That they are kind.<|im_end|>
<|im_start|>assistant I agree, life is too short to be with unkind people. For me it's important that friends reciprocate, and try to do as much for me as I do for them.<|im_end|>
<|im_start|>user That's true, I hate when people take you for granted.<|im_end|>
<|im_start|>assistant What do you think you could do to prevent that?

<|im_start|>system You are Jackie.<|im_end|>
<|im_start|>assistant How come you want to learn English?<|im_end|>
<|im_start|>user I want to learn English, because I want to be a teacher<|im_end|>
<|im_start|>assistant Oh, that's a great idea. I think you will be a great teacher. What do you think about the future of education?<|im_end|>
<|im_start|>user I think future education will be so good because teachers be so pretty and cute<|im_end|>
<|im_start|>assistant What about in the future do you think we will have?<|im_end|>
<|im_start|>user I think the future will be so amazing and that's and I will feel exciting<|im_end|>
<|im_start|>assistant I agree, the future will be amazing. Do you have any plans for your future?
```
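Since the card ships no usage instructions, here is a minimal sketch of how one might prompt the model given the format above; the exact prompt template and generation settings are assumptions, not documented by the author:

```python
# Hypothetical usage sketch inferred from the sample transcripts above; the
# repository documents no loading or generation code, so the prompt template
# and generation settings here are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "satpalsr/jackie-2.0-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Mirror the <|im_start|>role ... <|im_end|> layout of the samples, ending
# with an open assistant turn for the model to complete.
prompt = (
    "<|im_start|>system You are Jackie.<|im_end|>\n"
    "<|im_start|>user In your free time, what do you like to do?<|im_end|>\n"
    "<|im_start|>assistant "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```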
CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF
CHE-72
2024-06-21T18:48:12Z
5
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:quantized:Qwen/Qwen2-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-21T18:47:52Z
---
base_model: Qwen/Qwen2-7B-Instruct
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---

# CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen2-7B-Instruct`](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen2-7B-Instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF --hf-file qwen2-7b-instruct-q4_0.gguf -c 2048
```
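You can also fetch the weights programmatically and point the binaries at a local file. A small sketch using `huggingface_hub` follows; this is an assumption on our part, as the card itself only shows the `--hf-repo` flags above:

```python
# Hypothetical sketch: download the GGUF file once, then reuse the local
# path with llama.cpp or any other GGUF-compatible runtime.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="CHE-72/Qwen2-7B-Instruct-Q4_0-GGUF",
    filename="qwen2-7b-instruct-q4_0.gguf",
)
print(gguf_path)
```

The printed path can then be passed directly to the binaries built above, e.g. `./llama-cli -m <path> -p "..."`.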
DBangshu/Base_gemma_e5_9_1
DBangshu
2024-06-21T18:47:52Z
5
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T18:45:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
not1010011010/GPT2-medium-aivabot
not1010011010
2024-06-21T18:45:16Z
7
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T18:41:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
octoai/openpipe-llama-3-70b-instruct
octoai
2024-06-21T18:39:31Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T17:13:15Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: llama3 extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: LlamaUseReport@meta.com extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit widget: - example_title: Winter holidays messages: - role: system content: You are a helpful and honest assistant. Please, respond concisely and truthfully. - role: user content: Can you recommend a good destination for Winter holidays? - example_title: Programming assistant messages: - role: system content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully. - role: user content: Write a function that computes the nth fibonacci number. inference: parameters: max_new_tokens: 300 stop: - <|end_of_text|> - <|eot_id|> --- ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. 
**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English.

**Note:** Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase.
### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1])
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out of the box, as those by their nature will differ across different applications.

Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts.
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).

## Citation instructions

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin;
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
pgcd22/dummy-model
pgcd22
2024-06-21T18:38:07Z
7
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-06-21T16:26:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alokps/sample_model
alokps
2024-06-21T18:27:31Z
7
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-06-21T18:16:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liminerity/m7-alpaca-270-1bit-5
liminerity
2024-06-21T18:21:51Z
41
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:liminerity/m7-alpaca-70m-4", "base_model:finetune:liminerity/m7-alpaca-70m-4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T21:26:32Z
--- base_model: liminerity/m7-alpaca-70m-4 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** liminerity - **License:** apache-2.0 - **Finetuned from model :** liminerity/m7-alpaca-70m-4 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
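The card lists training details but no inference example. A minimal sketch, assuming the checkpoint loads through the standard transformers text-generation pipeline; the Alpaca-style prompt is only a guess from the repo name:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="liminerity/m7-alpaca-270-1bit-5")

# Hypothetical Alpaca-style prompt, inferred from the "alpaca" in the repo name.
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)
print(generator(prompt, max_new_tokens=48)[0]["generated_text"])
```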
ahmedesmail16/0.50-200Train-100Test-swinv2-large
ahmedesmail16
2024-06-21T18:21:30Z
23
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "base_model:microsoft/swinv2-large-patch4-window12-192-22k", "base_model:finetune:microsoft/swinv2-large-patch4-window12-192-22k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-17T22:45:16Z
--- license: apache-2.0 base_model: microsoft/swinv2-large-patch4-window12-192-22k tags: - generated_from_trainer metrics: - accuracy model-index: - name: 0.50-200Train-100Test-swinv2-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.50-200Train-100Test-swinv2-large This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12-192-22k](https://huggingface.co/microsoft/swinv2-large-patch4-window12-192-22k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7669 - Accuracy: 0.8233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 2.4602 | 0.9825 | 14 | 1.7254 | 0.4318 | | 1.7105 | 1.9649 | 28 | 0.8579 | 0.7047 | | 0.6096 | 2.9474 | 42 | 0.7268 | 0.7562 | | 0.3983 | 4.0 | 57 | 0.6706 | 0.7852 | | 0.1083 | 4.9825 | 71 | 0.7051 | 0.7897 | | 0.0952 | 5.9649 | 85 | 0.8423 | 0.7696 | | 0.1106 | 6.9474 | 99 | 0.6406 | 0.8121 | | 0.0357 | 8.0 | 114 | 0.8410 | 0.7897 | | 0.0522 | 8.9825 | 128 | 0.8197 | 0.7987 | | 0.0274 | 9.9649 | 142 | 0.8788 | 0.8098 | | 0.0203 | 10.9474 | 156 | 0.8037 | 0.8233 | | 0.0361 | 12.0 | 171 | 0.7932 | 0.8076 | | 0.0204 | 12.9825 | 185 | 0.7503 | 0.8210 | | 0.0165 | 13.9649 | 199 | 0.7416 | 0.8098 | | 0.0129 | 14.9474 | 213 | 0.8474 | 0.8277 | | 0.0062 | 16.0 | 228 | 0.7788 | 0.8233 | | 0.0028 | 16.9825 | 242 | 0.7687 | 0.8255 | | 0.001 | 17.9649 | 256 | 0.7730 | 0.8255 | | 0.0019 | 18.9474 | 270 | 0.7681 | 0.8255 | | 0.0014 | 19.6491 | 280 | 0.7669 | 0.8233 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
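For completeness, a hedged usage sketch with the standard transformers image-classification pipeline; `photo.jpg` is a placeholder path, and the label set depends on the unnamed training dataset:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ahmedesmail16/0.50-200Train-100Test-swinv2-large",
)
# Accepts a file path, URL, or PIL image; returns top labels with scores.
print(classifier("photo.jpg"))
```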
Almancy/finetuning-emotion-model
Almancy
2024-06-21T18:14:44Z
7
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T17:25:00Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-emotion-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-emotion-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8520 - Accuracy: 0.6217 - F1: 0.6221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.0229 | 1.0 | 871 | 0.8779 | 0.6059 | 0.6085 | | 0.8267 | 2.0 | 1742 | 0.8520 | 0.6217 | 0.6221 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.14.0 - Tokenizers 0.19.1
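A hedged inference sketch using the lower-level API; because the card does not name its label set, the example prints raw class probabilities rather than label strings:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "Almancy/finetuning-emotion-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I am over the moon today!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # one probability per (unnamed) emotion class
```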
mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF
mradermacher
2024-06-21T18:11:57Z
565
7
transformers
[ "transformers", "gguf", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "en", "dataset:teknium/OpenHermes-2.5", "base_model:OpenPipe/Hermes-2-Theta-Llama-3-8B-32k", "base_model:quantized:OpenPipe/Hermes-2-Theta-Llama-3-8B-32k", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-21T12:34:17Z
--- base_model: OpenPipe/Hermes-2-Theta-Llama-3-8B-32k datasets: - teknium/OpenHermes-2.5 language: - en library_name: transformers quantized_by: mradermacher tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/OpenPipe/Hermes-2-Theta-Llama-3-8B-32k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hermes-2-Theta-Llama-3-8B-32k-GGUF/resolve/main/Hermes-2-Theta-Llama-3-8B-32k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow 
comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
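As a concrete starting point, one hedged way to run a quant from the table above is llama-cpp-python, which loads GGUF files directly; the file name matches the Q4_K_M row and must be downloaded from this repo first:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Hermes-2-Theta-Llama-3-8B-32k.Q4_K_M.gguf",
    n_ctx=8192,  # assumption: raise toward the model's 32k only if RAM allows
)
result = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```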
fruk19/N_ASR_MID
fruk19
2024-06-21T18:00:41Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "th", "dataset:fruk19/S_SMALL", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T04:16:59Z
--- language: - th license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - fruk19/S_SMALL metrics: - wer model-index: - name: South_asri results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: aicookcook type: fruk19/S_SMALL config: default split: None args: 'config: th' metrics: - name: Wer type: wer value: 17.728859060402684 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # South_asri This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the aicookcook dataset. It achieves the following results on the evaluation set: - Loss: 0.1606 - Wer: 17.7289 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.1544 | 2.0 | 6000 | 0.1638 | 20.1799 | | 0.0584 | 4.0 | 12000 | 0.1606 | 17.7289 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.0.1+cu117 - Datasets 2.20.0 - Tokenizers 0.19.1
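A hedged transcription sketch via the standard transformers ASR pipeline; `sample_th.wav` is a placeholder for a Thai audio file:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="fruk19/N_ASR_MID")
print(asr("sample_th.wav")["text"])
```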
Anujgr8/wav2vec2-base-Gujraati-large
Anujgr8
2024-06-21T17:58:58Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T17:16:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kxm1k4m1/icu-mama-cooking
kxm1k4m1
2024-06-21T17:42:52Z
244
0
transformers
[ "transformers", "safetensors", "blip-2", "visual-question-answering", "image-to-text", "th", "base_model:Salesforce/blip2-opt-2.7b-coco", "base_model:finetune:Salesforce/blip2-opt-2.7b-coco", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2024-06-18T13:34:20Z
--- library_name: transformers license: mit language: - th pipeline_tag: image-to-text base_model: Salesforce/blip2-opt-2.7b-coco --- ## THAI-BLIP-2 fine-tuned for the image-captioning task from [blip2-opt-2.7b-coco](https://huggingface.co/Salesforce/blip2-opt-2.7b-coco) with MSCOCO2017 Thai captions. ## How to use: ```python from transformers import Blip2ForConditionalGeneration, Blip2Processor from PIL import Image import torch device = "cuda" if torch.cuda.is_available() else "cpu" processor = Blip2Processor.from_pretrained("kxm1k4m1/icu-mama-cooking") model = Blip2ForConditionalGeneration.from_pretrained("kxm1k4m1/icu-mama-cooking", device_map=device, torch_dtype=torch.bfloat16) img = Image.open("Your image...") inputs = processor(images=img, return_tensors="pt").to(device, torch.bfloat16) # Adjust your `max_length` generated_ids = model.generate(**inputs, max_length=20) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) print(generated_text) ```
nagrajn/TinyLinuxDSLM81M
nagrajn
2024-06-21T17:22:00Z
8
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T17:21:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
basakdemirok/bert-base-multilingual-cased-news_cat_vrandom
basakdemirok
2024-06-21T17:21:46Z
4
0
transformers
[ "transformers", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T17:11:25Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_keras_callback model-index: - name: basakdemirok/bert-base-multilingual-cased-news_cat_vrandom results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # basakdemirok/bert-base-multilingual-cased-news_cat_vrandom This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0498 - Validation Loss: 0.2530 - Train F1: 0.9333 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1496, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train F1 | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 0.5665 | 0.4393 | 0.8868 | 0 | | 0.2030 | 0.2369 | 0.9192 | 1 | | 0.0859 | 0.2895 | 0.9130 | 2 | | 0.0498 | 0.2530 | 0.9333 | 3 | ### Framework versions - Transformers 4.41.2 - TensorFlow 2.13.1 - Datasets 2.4.0 - Tokenizers 0.19.1
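The optimizer block above is a serialized Keras config; for readability, here is the same schedule reconstructed as plain tf.keras code (a sketch carrying over the exact values from that config):

```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 1496 steps, as in the serialized config.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1496,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```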
Promptengineering/tinyllama-chatbot-v0
Promptengineering
2024-06-21T17:14:07Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T17:12:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
alegchenko/aya-23-35B-AWQ-ru-v0
alegchenko
2024-06-21T17:13:03Z
7
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "ru", "en", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-06-21T16:24:07Z
--- license: cc-by-4.0 language: - ru - en ---
limaatulya/my_awesome_billsum_model_58
limaatulya
2024-06-21T17:05:12Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-21T15:29:54Z
--- license: apache-2.0 base_model: google-t5/t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model_58 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model_58 This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2895 - Rouge1: 0.9839 - Rouge2: 0.9097 - Rougel: 0.944 - Rougelsum: 0.9405 - Gen Len: 4.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 12 | 0.6061 | 0.9622 | 0.8634 | 0.9034 | 0.9001 | 4.9375 | | No log | 2.0 | 24 | 0.5474 | 0.9683 | 0.8667 | 0.9081 | 0.9044 | 4.8125 | | No log | 3.0 | 36 | 0.5017 | 0.9683 | 0.8667 | 0.9081 | 0.9044 | 4.8125 | | No log | 4.0 | 48 | 0.4739 | 0.9712 | 0.8792 | 0.9167 | 0.914 | 4.8333 | | No log | 5.0 | 60 | 0.4346 | 0.9663 | 0.8708 | 0.9124 | 0.9094 | 4.875 | | No log | 6.0 | 72 | 0.3980 | 0.9663 | 0.8708 | 0.9124 | 0.9094 | 4.875 | | No log | 7.0 | 84 | 0.3772 | 0.9663 | 0.8708 | 0.9124 | 0.9094 | 4.875 | | No log | 8.0 | 96 | 0.3630 | 0.9663 | 0.8708 | 0.9124 | 0.9094 | 4.875 | | No log | 9.0 | 108 | 0.3453 | 0.9651 | 0.8583 | 0.9106 | 0.9064 | 4.9167 | | No log | 10.0 | 120 | 0.3297 | 0.9651 | 0.8583 | 0.9106 | 0.9064 | 4.9167 | | No log | 11.0 | 132 | 0.3209 | 0.9651 | 0.8583 | 0.9106 | 0.9064 | 4.9167 | | No log | 12.0 | 144 | 0.3122 | 0.9651 | 0.8583 | 0.9106 | 0.9064 | 4.9167 | | No log | 13.0 | 156 | 0.3025 | 0.9738 | 0.875 | 0.9232 | 0.9196 | 4.9375 | | No log | 14.0 | 168 | 0.2975 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 15.0 | 180 | 0.2979 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 16.0 | 192 | 0.2983 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 17.0 | 204 | 0.2967 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 18.0 | 216 | 0.2930 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 19.0 | 228 | 0.2877 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 20.0 | 240 | 0.2861 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 21.0 | 252 | 0.2896 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 22.0 | 264 | 0.2940 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 23.0 | 276 | 0.2912 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 24.0 | 288 | 0.2849 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 25.0 | 300 | 0.2879 | 0.9768 | 0.8896 | 0.9339 | 0.9298 | 4.9167 | | No log | 26.0 | 312 | 0.2953 | 0.981 | 0.9125 | 0.9446 | 0.9417 | 4.8958 | | No log | 27.0 | 324 | 0.2998 | 0.981 | 0.9125 | 0.9446 | 0.9417 | 4.8958 | | No log | 28.0 | 336 | 0.2933 | 0.9839 | 0.9181 | 0.9537 | 0.9512 | 4.9167 | | No log | 29.0 | 
348 | 0.2890 | 0.9798 | 0.8958 | 0.9419 | 0.94 | 4.9375 | | No log | 30.0 | 360 | 0.2895 | 0.9798 | 0.8958 | 0.9419 | 0.94 | 4.9375 | | No log | 31.0 | 372 | 0.2926 | 0.9839 | 0.9181 | 0.9537 | 0.9512 | 4.9167 | | No log | 32.0 | 384 | 0.2927 | 0.9839 | 0.9181 | 0.9537 | 0.9512 | 4.9167 | | No log | 33.0 | 396 | 0.2911 | 0.9839 | 0.9181 | 0.9537 | 0.9512 | 4.9167 | | No log | 34.0 | 408 | 0.2871 | 0.976 | 0.8875 | 0.9331 | 0.9296 | 4.9167 | | No log | 35.0 | 420 | 0.2885 | 0.9827 | 0.8951 | 0.9406 | 0.9384 | 4.9583 | | No log | 36.0 | 432 | 0.2925 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | No log | 37.0 | 444 | 0.2902 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | No log | 38.0 | 456 | 0.2888 | 0.9827 | 0.8951 | 0.9406 | 0.9384 | 4.9583 | | No log | 39.0 | 468 | 0.2875 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | No log | 40.0 | 480 | 0.2909 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | No log | 41.0 | 492 | 0.2920 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 42.0 | 504 | 0.2881 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 43.0 | 516 | 0.2827 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 44.0 | 528 | 0.2777 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 45.0 | 540 | 0.2756 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 46.0 | 552 | 0.2764 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 47.0 | 564 | 0.2799 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 48.0 | 576 | 0.2800 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 49.0 | 588 | 0.2851 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 50.0 | 600 | 0.2896 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 51.0 | 612 | 0.2904 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 52.0 | 624 | 0.2842 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 53.0 | 636 | 0.2826 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 54.0 | 648 | 0.2856 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 55.0 | 660 | 0.2826 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 56.0 | 672 | 0.2881 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 57.0 | 684 | 0.2932 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 58.0 | 696 | 0.2914 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 59.0 | 708 | 0.2936 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 60.0 | 720 | 0.2966 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 61.0 | 732 | 0.2964 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 62.0 | 744 | 0.2948 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 63.0 | 756 | 0.2930 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 64.0 | 768 | 0.2873 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 65.0 | 780 | 0.2879 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 66.0 | 792 | 0.2880 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 67.0 | 804 | 0.2892 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 68.0 | 816 | 0.2894 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.3305 | 69.0 | 828 | 0.2891 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 70.0 | 840 | 0.2876 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 71.0 | 852 | 0.2877 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 72.0 | 864 | 0.2842 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 73.0 | 876 | 0.2865 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.3305 | 74.0 | 888 | 0.2840 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 75.0 | 900 | 0.2815 | 0.98 | 
0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 76.0 | 912 | 0.2798 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 77.0 | 924 | 0.2813 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 78.0 | 936 | 0.2842 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 79.0 | 948 | 0.2856 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 80.0 | 960 | 0.2863 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 81.0 | 972 | 0.2863 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 82.0 | 984 | 0.2872 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.3305 | 83.0 | 996 | 0.2879 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.1008 | 84.0 | 1008 | 0.2870 | 0.98 | 0.9097 | 0.9446 | 0.9413 | 4.8958 | | 0.1008 | 85.0 | 1020 | 0.2871 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 86.0 | 1032 | 0.2868 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 87.0 | 1044 | 0.2873 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 88.0 | 1056 | 0.2878 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 89.0 | 1068 | 0.2887 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 90.0 | 1080 | 0.2895 | 0.9869 | 0.9167 | 0.9522 | 0.95 | 4.9375 | | 0.1008 | 91.0 | 1092 | 0.2900 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 92.0 | 1104 | 0.2908 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 93.0 | 1116 | 0.2908 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 94.0 | 1128 | 0.2904 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 95.0 | 1140 | 0.2901 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 96.0 | 1152 | 0.2899 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 97.0 | 1164 | 0.2896 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 98.0 | 1176 | 0.2895 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 99.0 | 1188 | 0.2895 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | | 0.1008 | 100.0 | 1200 | 0.2895 | 0.9839 | 0.9097 | 0.944 | 0.9405 | 4.9167 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
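A hedged usage sketch for this fine-tune; the training corpus is undocumented, and the reported Gen Len of roughly 5 tokens suggests the model emits very short summaries:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="limaatulya/my_awesome_billsum_model_58")
text = "A long legislative bill text to condense..."  # placeholder input
print(summarizer(text, max_length=16, min_length=2)[0]["summary_text"])
```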
NoOrdinaryJoy/outputs
NoOrdinaryJoy
2024-06-21T16:58:19Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "unsloth", "generated_from_trainer", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:adapter:unsloth/llama-3-8b-bnb-4bit", "license:llama2", "region:us" ]
null
2024-06-14T12:30:41Z
--- base_model: unsloth/llama-3-8b-bnb-4bit library_name: peft license: llama2 tags: - trl - sft - unsloth - generated_from_trainer model-index: - name: outputs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 3407 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 60 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
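Since this repo holds only LoRA adapter weights, a minimal loading sketch with peft, assuming the 4-bit base named in the card:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", device_map="auto"
)
model = PeftModel.from_pretrained(base, "NoOrdinaryJoy/outputs")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```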
henilp105/InjecAgent-vicuna-7b-v1.5-10
henilp105
2024-06-21T16:50:33Z
6
0
peft
[ "peft", "pytorch", "safetensors", "llama", "arxiv:1910.09700", "base_model:lmsys/vicuna-7b-v1.5", "base_model:adapter:lmsys/vicuna-7b-v1.5", "region:us" ]
null
2024-06-21T14:14:34Z
--- base_model: lmsys/vicuna-7b-v1.5 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
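To get started despite the placeholders above, here is a minimal, hypothetical sketch of attaching a PEFT adapter to the declared base model `lmsys/vicuna-7b-v1.5` (per the card's `base_model` front matter). It assumes this repo hosts a LoRA-style causal-LM adapter; `ADAPTER_REPO` is a placeholder, since the card does not name the repo:

```python
# Hypothetical usage sketch: load the base model, then attach this PEFT adapter.
# ADAPTER_REPO is a placeholder -- replace it with the actual adapter repo id.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_REPO = "your-username/your-vicuna-adapter"  # placeholder, not confirmed by the card

base = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)  # attaches the adapter weights
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

# Vicuna v1.5 uses a simple USER/ASSISTANT conversation format.
prompt = "USER: What is PEFT in one sentence? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```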
okareo-ai/webbizz_classification_model
okareo-ai
2024-06-21T16:46:00Z
11
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-21T16:26:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
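A minimal quick-start sketch for the repo's declared `text-classification` task, based only on the pipeline tag and DistilBERT architecture above; the label names and their meanings are not documented on the card, so they come from whatever the checkpoint's config defines:

```python
# Minimal sketch: run the declared text-classification task via the pipeline API.
# Inspect the output to see which labels this checkpoint actually returns.
from transformers import pipeline

clf = pipeline("text-classification", model="okareo-ai/webbizz_classification_model")
print(clf("How do I update the billing address on my account?"))
# e.g. [{'label': '...', 'score': 0.97}] -- labels come from the model's config
```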
depth-anything/Depth-Anything-V2-Metric-Hypersim-Large
depth-anything
2024-06-21T16:45:21Z
0
2
null
[ "arxiv:2406.09414", "license:apache-2.0", "region:us" ]
null
2024-06-13T19:24:09Z
---
license: apache-2.0
---

# Depth Anything V2 for Metric Depth Estimation

# Pre-trained Models

We provide **six metric depth models** at three scales, for indoor and outdoor scenes respectively.

| Base Model | Params | Indoor (Hypersim) | Outdoor (Virtual KITTI 2) |
|:-|-:|:-:|:-:|
| Depth-Anything-V2-Small | 24.8M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Small/resolve/main/depth_anything_v2_metric_hypersim_vits.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Small/resolve/main/depth_anything_v2_metric_vkitti_vits.pth?download=true) |
| Depth-Anything-V2-Base | 97.5M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Base/resolve/main/depth_anything_v2_metric_hypersim_vitb.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Base/resolve/main/depth_anything_v2_metric_vkitti_vitb.pth?download=true) |
| Depth-Anything-V2-Large | 335.3M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Large/resolve/main/depth_anything_v2_metric_hypersim_vitl.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Large/resolve/main/depth_anything_v2_metric_vkitti_vitl.pth?download=true) |

*We recommend first trying our larger models (if the computational cost is affordable) and the indoor version.*

## Usage

### Preparation

```bash
git clone https://github.com/DepthAnything/Depth-Anything-V2
cd Depth-Anything-V2/metric_depth
pip install -r requirements.txt
```

Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.

### Use our models

```python
import cv2
import torch

from depth_anything_v2.dpt import DepthAnythingV2

model_configs = {
    'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
    'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
    'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]}
}

encoder = 'vitl'      # or 'vits', 'vitb'
dataset = 'hypersim'  # 'hypersim' for the indoor model, 'vkitti' for the outdoor model
max_depth = 20        # 20 for the indoor model, 80 for the outdoor model

model = DepthAnythingV2(**{**model_configs[encoder], 'max_depth': max_depth})
model.load_state_dict(torch.load(f'checkpoints/depth_anything_v2_metric_{dataset}_{encoder}.pth', map_location='cpu'))
model.eval()

raw_img = cv2.imread('your/image/path')
depth = model.infer_image(raw_img)  # HxW depth map in meters, as a NumPy array
```

### Running the script on images

Here, we take the `vitl` encoder as an example. You can also use the `vitb` or `vits` encoders.
```bash
# indoor scenes
python run.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
  --max-depth 20 \
  --img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]

# outdoor scenes
python run.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_vkitti_vitl.pth \
  --max-depth 80 \
  --img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]
```

### Project 2D images to point clouds

```bash
python depth_to_pointcloud.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
  --max-depth 20 \
  --img-path <path> --outdir <outdir>
```

### Reproduce training

Please first prepare the [Hypersim](https://github.com/apple/ml-hypersim) and [Virtual KITTI 2](https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/) datasets. Then:

```bash
bash dist_train.sh
```

## Citation

If you find this project useful, please consider citing:

```bibtex
@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}

@inproceedings{depth_anything_v1,
  title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  booktitle={CVPR},
  year={2024}
}
```
depth-anything/Depth-Anything-V2-Metric-VKITTI-Large
depth-anything
2024-06-21T16:45:05Z
0
3
null
[ "arxiv:2406.09414", "license:apache-2.0", "region:us" ]
null
2024-06-14T00:18:51Z
---
license: apache-2.0
---

# Depth Anything V2 for Metric Depth Estimation

# Pre-trained Models

We provide **six metric depth models** at three scales, for indoor and outdoor scenes respectively.

| Base Model | Params | Indoor (Hypersim) | Outdoor (Virtual KITTI 2) |
|:-|-:|:-:|:-:|
| Depth-Anything-V2-Small | 24.8M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Small/resolve/main/depth_anything_v2_metric_hypersim_vits.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Small/resolve/main/depth_anything_v2_metric_vkitti_vits.pth?download=true) |
| Depth-Anything-V2-Base | 97.5M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Base/resolve/main/depth_anything_v2_metric_hypersim_vitb.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Base/resolve/main/depth_anything_v2_metric_vkitti_vitb.pth?download=true) |
| Depth-Anything-V2-Large | 335.3M | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Hypersim-Large/resolve/main/depth_anything_v2_metric_hypersim_vitl.pth?download=true) | [Download](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-VKITTI-Large/resolve/main/depth_anything_v2_metric_vkitti_vitl.pth?download=true) |

*We recommend first trying our larger models (if the computational cost is affordable) and the indoor version.*

## Usage

### Preparation

```bash
git clone https://github.com/DepthAnything/Depth-Anything-V2
cd Depth-Anything-V2/metric_depth
pip install -r requirements.txt
```

Download the checkpoints listed [here](#pre-trained-models) and put them under the `checkpoints` directory.

### Use our models

```python
import cv2
import torch

from depth_anything_v2.dpt import DepthAnythingV2

model_configs = {
    'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
    'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
    'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]}
}

encoder = 'vitl'      # or 'vits', 'vitb'
dataset = 'hypersim'  # 'hypersim' for the indoor model, 'vkitti' for the outdoor model
max_depth = 20        # 20 for the indoor model, 80 for the outdoor model

model = DepthAnythingV2(**{**model_configs[encoder], 'max_depth': max_depth})
model.load_state_dict(torch.load(f'checkpoints/depth_anything_v2_metric_{dataset}_{encoder}.pth', map_location='cpu'))
model.eval()

raw_img = cv2.imread('your/image/path')
depth = model.infer_image(raw_img)  # HxW depth map in meters, as a NumPy array
```

### Running the script on images

Here, we take the `vitl` encoder as an example. You can also use the `vitb` or `vits` encoders.
```bash
# indoor scenes
python run.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
  --max-depth 20 \
  --img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]

# outdoor scenes
python run.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_vkitti_vitl.pth \
  --max-depth 80 \
  --img-path <path> --outdir <outdir> [--input-size <size>] [--save-numpy]
```

### Project 2D images to point clouds

```bash
python depth_to_pointcloud.py \
  --encoder vitl \
  --load-from checkpoints/depth_anything_v2_metric_hypersim_vitl.pth \
  --max-depth 20 \
  --img-path <path> --outdir <outdir>
```

### Reproduce training

Please first prepare the [Hypersim](https://github.com/apple/ml-hypersim) and [Virtual KITTI 2](https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/) datasets. Then:

```bash
bash dist_train.sh
```

## Citation

If you find this project useful, please consider citing:

```bibtex
@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}

@inproceedings{depth_anything_v1,
  title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  booktitle={CVPR},
  year={2024}
}
```
legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF
legraphista
2024-06-21T16:41:19Z
233
3
gguf
[ "gguf", "chat", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:Qwen/Qwen2-57B-A14B-Instruct", "base_model:quantized:Qwen/Qwen2-57B-A14B-Instruct", "license:apache-2.0", "region:us", "conversational" ]
text-generation
2024-06-06T20:33:35Z
---
base_model: Qwen/Qwen2-57B-A14B-Instruct
inference: false
language:
- en
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- chat
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

<br>
<div style="padding: 16px 32px; outline: 2px solid; border-radius: 10px; outline-color: red; margin: 12px">
Currently investigating an issue with quantizing imatrix variants. For static quants, visit <a href="https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-GGUF">legraphista/Qwen2-57B-A14B-Instruct-GGUF</a>
<pre>
[ 12/ 479] blk.0.ffn_gate_exps.weight - [ 3584, 2560, 64, 1], type = f32, converting to q4_K ..
ggml_validate_row_data: found nan value at block 1
ggml_validate_row_data: found nan value at block 0
ggml_validate_row_data: found nan value at block 0
ggml_validate_row_data: found nan value at block 0
ggml_validate_row_data: found nan value at block 14
</pre>
</div>

---

# Qwen2-57B-A14B-Instruct-IMat-GGUF
_Llama.cpp imatrix quantization of Qwen/Qwen2-57B-A14B-Instruct_

Original Model: [Qwen/Qwen2-57B-A14B-Instruct](https://huggingface.co/Qwen/Qwen2-57B-A14B-Instruct)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3091](https://github.com/ggerganov/llama.cpp/releases/tag/b3091)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-57B-A14B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q8_0) | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q6_K) | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| Qwen2-57B-A14B-Instruct.Q4_K | Q4_K | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q3_K | Q3_K | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q2_K | Q2_K | - | ❌ Errored | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Qwen2-57B-A14B-Instruct.BF16/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.BF16) | BF16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.FP16/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.FP16) | F16 | 114.84GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q8_0/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q8_0) | Q8_0 | 61.02GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q6_K/*](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/tree/main/Qwen2-57B-A14B-Instruct.Q6_K) | Q6_K | 47.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Qwen2-57B-A14B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q5_K.gguf) | Q5_K | 40.80GB | ✅ Available | ⚪ Static | 📦 No |
| [Qwen2-57B-A14B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF/blob/main/Qwen2-57B-A14B-Instruct.Q5_K_S.gguf) | Q5_K_S | 39.57GB | ✅ Available | ⚪ Static | 📦 No |
| Qwen2-57B-A14B-Instruct.Q4_K | Q4_K | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q4_K_S | Q4_K_S | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ4_NL | IQ4_NL | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ4_XS | IQ4_XS | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q3_K | Q3_K | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q3_K_L | Q3_K_L | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q3_K_S | Q3_K_S | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ3_M | IQ3_M | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ3_S | IQ3_S | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ3_XS | IQ3_XS | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ3_XXS | IQ3_XXS | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q2_K | Q2_K | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.Q2_K_S | Q2_K_S | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ2_M | IQ2_M | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ2_S | IQ2_S | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ2_XS | IQ2_XS | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ2_XXS | IQ2_XXS | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ1_M | IQ1_M | - | ❌ Errored | 🟢 IMatrix | - |
| Qwen2-57B-A14B-Instruct.IQ1_S | IQ1_S | - | ❌ Errored | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Qwen2-57B-A14B-Instruct-IMat-GGUF --include "Qwen2-57B-A14B-Instruct.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```

---

## Inference

### Simple chat template
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```

### Chat template with system prompt
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>
```

### Llama.cpp
```
llama.cpp/main -m Qwen2-57B-A14B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Qwen2-57B-A14B-Instruct.Q8_0`)
3. Run `gguf-split --merge Qwen2-57B-A14B-Instruct.Q8_0/Qwen2-57B-A14B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-57B-A14B-Instruct.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
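As a supplement to the `llama.cpp` CLI invocation above, the same files work from Python via the `llama-cpp-python` bindings. This sketch is not part of the original card and assumes you have already downloaded (and, for split quants, merged) `Qwen2-57B-A14B-Instruct.Q8_0.gguf` as described in the FAQ:

```python
# Sketch: chat with the merged GGUF through llama-cpp-python
# (pip install llama-cpp-python). If the GGUF embeds its chat template,
# create_chat_completion applies the ChatML format shown above automatically.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-57B-A14B-Instruct.Q8_0.gguf",
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available, 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what an importance matrix (imatrix) is."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```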
ymoslem/whisper-medium-ga2en-a-v1-r
ymoslem
2024-06-21T16:28:57Z
20
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ga", "en", "dataset:ymoslem/IWSLT2023-GA-EN", "dataset:ymoslem/FLEURS-GA-EN", "dataset:ymoslem/BitesizeIrish-GA-EN", "dataset:ymoslem/SpokenWords-GA-EN-MTed", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-21T02:35:31Z
--- language: - ga - en license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer datasets: - ymoslem/IWSLT2023-GA-EN - ymoslem/FLEURS-GA-EN - ymoslem/BitesizeIrish-GA-EN - ymoslem/SpokenWords-GA-EN-MTed metrics: - bleu - wer model-index: - name: Whisper Medium GA-EN Speech Translation Raw results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords type: ymoslem/IWSLT2023-GA-EN metrics: - name: Bleu type: bleu value: 27.65 - name: Wer type: wer value: 71.09410175596578 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium GA-EN Speech Translation Raw This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords dataset. It achieves the following results on the evaluation set: - Loss: 1.6246 - Bleu: 27.65 - Chrf: 47.08 - Wer: 71.0941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | Wer | |:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:| | 2.3743 | 0.0539 | 100 | 2.1064 | 5.67 | 20.91 | 126.9248 | | 2.3196 | 0.1079 | 200 | 2.1133 | 11.35 | 26.01 | 89.5092 | | 2.2729 | 0.1618 | 300 | 2.0561 | 6.85 | 25.04 | 156.5061 | | 2.0887 | 0.2157 | 400 | 1.9701 | 10.46 | 29.21 | 118.6853 | | 1.9663 | 0.2697 | 500 | 1.9824 | 16.53 | 31.2 | 77.5326 | | 1.9504 | 0.3236 | 600 | 1.8619 | 7.02 | 27.46 | 193.7416 | | 1.7843 | 0.3776 | 700 | 1.8683 | 16.6 | 33.6 | 87.7082 | | 1.8915 | 0.4315 | 800 | 1.7730 | 16.89 | 36.54 | 91.8505 | | 1.6921 | 0.4854 | 900 | 1.8049 | 13.14 | 34.45 | 114.0477 | | 1.4761 | 0.5394 | 1000 | 1.8310 | 22.12 | 37.3 | 77.1724 | | 1.3067 | 0.5933 | 1100 | 1.7911 | 17.21 | 34.34 | 90.5448 | | 1.3564 | 0.6472 | 1200 | 1.7045 | 20.09 | 39.67 | 85.1869 | | 1.489 | 0.7012 | 1300 | 1.7601 | 15.3 | 36.53 | 107.8793 | | 1.3023 | 0.7551 | 1400 | 1.7428 | 18.99 | 39.54 | 89.7794 | | 1.1744 | 0.8091 | 1500 | 1.7446 | 21.68 | 41.78 | 79.4687 | | 1.0122 | 0.8630 | 1600 | 1.7180 | 18.28 | 39.27 | 96.7582 | | 1.0787 | 0.9169 | 1700 | 1.6144 | 16.94 | 39.74 | 98.8744 | | 0.9561 | 0.9709 | 1800 | 1.6290 | 25.29 | 42.13 | 74.9662 | | 0.4452 | 1.0248 | 1900 | 1.7223 | 18.95 | 39.14 | 97.0734 | | 0.4397 | 1.0787 | 2000 | 1.6855 | 23.4 | 40.9 | 77.9379 | | 0.4382 | 1.1327 | 2100 | 1.6911 | 24.95 | 41.19 | 72.8951 | | 0.3937 | 1.1866 | 2200 | 1.7127 | 23.33 | 41.09 | 78.4331 | | 0.4119 | 1.2406 | 2300 | 1.6796 | 23.25 | 42.32 | 83.6560 | | 0.4139 | 1.2945 | 2400 | 1.6730 | 23.13 | 43.25 | 83.3408 | | 0.3506 | 1.3484 | 2500 | 1.7361 | 23.37 | 42.31 | 79.9190 | | 0.4109 | 1.4024 | 2600 | 1.6233 | 23.78 | 44.32 | 82.8005 | | 0.3563 | 1.4563 | 2700 | 1.6383 | 20.41 | 43.66 | 98.1540 | | 0.3355 | 1.5102 | 2800 | 1.6675 | 25.27 | 44.91 | 75.6866 | | 0.2751 | 1.5642 | 2900 
| 1.7011 | 24.64 | 43.19 | 74.2008 | | 0.28 | 1.6181 | 3000 | 1.6308 | 24.76 | 45.49 | 79.4687 | | 0.3108 | 1.6721 | 3100 | 1.5976 | 28.9 | 47.03 | 68.7978 | | 0.3231 | 1.7260 | 3200 | 1.6070 | 27.82 | 46.1 | 69.8334 | | 0.2665 | 1.7799 | 3300 | 1.5853 | 26.0 | 44.51 | 74.9212 | | 0.2788 | 1.8339 | 3400 | 1.5689 | 26.37 | 46.94 | 75.0113 | | 0.243 | 1.8878 | 3500 | 1.5885 | 29.12 | 46.94 | 67.4021 | | 0.2605 | 1.9417 | 3600 | 1.5680 | 28.64 | 46.38 | 67.8523 | | 0.1664 | 1.9957 | 3700 | 1.5910 | 28.45 | 46.64 | 68.0774 | | 0.049 | 2.0496 | 3800 | 1.6385 | 27.78 | 46.51 | 69.9235 | | 0.0635 | 2.1036 | 3900 | 1.6272 | 27.57 | 47.25 | 71.1391 | | 0.0467 | 2.1575 | 4000 | 1.6246 | 27.65 | 47.08 | 71.0941 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.2.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
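The usage sections above say "More information needed"; as a stopgap, here is a hedged sketch of running the checkpoint with the `transformers` ASR pipeline. That the output is English text for Irish-language audio follows from the card's description of the model as a GA-EN speech translator; the audio file name is a placeholder:

```python
# Sketch: feed an Irish-language audio clip to the fine-tuned checkpoint.
# Assumes a local audio file; the pipeline handles resampling to 16 kHz.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ymoslem/whisper-medium-ga2en-a-v1-r",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)
result = asr("irish_speech_sample.wav")  # placeholder filename
print(result["text"])  # expected: an English translation of the Irish audio
```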
AdamRTomkins/phi-kal
AdamRTomkins
2024-06-21T16:24:01Z
6
0
peft
[ "peft", "tensorboard", "safetensors", "phi", "axolotl", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2024-06-11T07:19:29Z
---
license: mit
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-kal
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: microsoft/phi-1_5
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: garage-bAInd/Open-Platypus
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/phi-sft-out

sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true

adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000003

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

warmup_steps: 100
evals_per_epoch: 1
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"

hub_model_id: AdamRTomkins/phi-kal
hub_strategy: end
max_steps: 2

# Setting to enable pre-ampere cards!
bf16: auto
fp16: false
```

</details><br>

# phi-kal

This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the garage-bAInd/Open-Platypus dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 2.4120

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.3765        | 0.0   | 2    | 2.4120          |

### Framework versions

- PEFT 0.8.2
- Transformers 4.39.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.0
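The card stops at framework versions without showing inference; below is a minimal sketch of loading the adapter for generation, mirroring the 4-bit (`load_in_4bit: true`) and alpaca-format choices in the axolotl config above. Note that with `max_steps: 2` the adapter has barely been trained, so outputs should track the base model closely:

```python
# Sketch: attach the phi-kal QLoRA adapter to microsoft/phi-1_5, loaded in
# 4-bit to mirror the axolotl config's `load_in_4bit: true`.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "AdamRTomkins/phi-kal")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Alpaca-style prompt, matching `type: alpaca` in the training config.
prompt = "### Instruction:\nName three prime numbers.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```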
AliGhiasvand86/long-t5-tglobal-base-essay-scoring3
AliGhiasvand86
2024-06-21T16:22:05Z
8
0
transformers
[ "transformers", "safetensors", "longt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-21T16:21:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
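Because the card is an unfilled template, here is a hypothetical quick-start for the repo's declared `text2text-generation` task. The essay-prompt format and score scale this checkpoint expects are undocumented, so both the input and the interpretation of the output are assumptions:

```python
# Sketch: run the declared text2text-generation task. LongT5's transient-global
# attention suits long inputs such as whole essays, but the exact prompt format
# this checkpoint expects is undocumented -- treat this as illustrative only.
from transformers import pipeline

scorer = pipeline(
    "text2text-generation",
    model="AliGhiasvand86/long-t5-tglobal-base-essay-scoring3",
)
essay = "Technology has transformed education by ..."  # placeholder essay text
print(scorer(essay, max_new_tokens=8))  # presumably emits a score-like string
```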
ByteForge/Llama_3_8b_Instruct_Text2Sql_Finetuned
ByteForge
2024-06-21T16:15:00Z
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-21T16:09:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
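The template above carries no usage code, so here is a hedged sketch of prompting the model for SQL through the tokenizer's chat template. Loading in 8-bit mirrors the repo's `8-bit`/`bitsandbytes` tags; the schema-plus-question prompt shape is a common text-to-SQL convention, not something this card documents:

```python
# Sketch: query the fine-tuned model for SQL. Loaded in 8-bit to match the
# repo's bitsandbytes/8-bit tags; the prompt layout is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "ByteForge/Llama_3_8b_Instruct_Text2Sql_Finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, quantization_config=BitsAndBytesConfig(load_in_8bit=True), device_map="auto"
)

messages = [
    {"role": "user", "content": "Schema: employees(id, name, salary, dept)\n"
                                "Question: average salary per department?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=96)
# Decode only the newly generated tokens (the SQL answer).
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```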
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-en-100-percent-low-nv-embed-mmlu_cm
AdamKasumovic
2024-06-21T15:58:40Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-21T15:54:10Z
--- base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl --- # Uploaded model - **Developed by:** AdamKasumovic - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
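For readers who want to try the model, here is a brief, hedged loading sketch using Unsloth's `FastLanguageModel`, consistent with the card's note that the model was trained with Unsloth from the 4-bit base. Whether this repo holds merged weights or a LoRA adapter is not stated, so loading the repo id directly is an assumption:

```python
# Sketch: load the fine-tune with Unsloth's 4-bit fast path and run inference
# on a CUDA device. Assumes the repo's weights load directly (merged or
# adapter); the card itself does not specify which.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-en-100-percent-low-nv-embed-mmlu_cm",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation mode

inputs = tokenizer("Explain multiple-choice question answering briefly.", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```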