Dataset schema (column, type, and observed value range):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-09 06:31:45 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (550 classes) | n/a | n/a |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-09 06:31:30 |
| card | string (length) | 11 | 1.01M |
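The statistics above report maxima in human-readable form ("223M" downloads, "11.7k" likes). A small helper can be used to turn such strings back into integers when filtering rows; this is a sketch, not part of the dataset tooling:

```python
def parse_count(s: str) -> int:
    """Parse a human-readable count such as '11.7k' or '223M' into an integer."""
    s = s.strip()
    suffixes = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    if s and s[-1] in suffixes:
        # round() guards against float artifacts like 11.7 * 1000 == 11699.999...
        return int(round(float(s[:-1]) * suffixes[s[-1]]))
    return int(s)
```

For example, `parse_count("223M")` gives `223000000` and `parse_count("11.7k")` gives `11700`.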
modelId: nlpzhaof/aligngpt-13b-pretrain
author: nlpzhaof
last_modified: 2024-06-29T03:59:36Z
downloads: 5
likes: 0
library_name: transformers
tags: ["transformers", "aligngpt", "text-generation", "en", "arxiv:2405.14129", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-05-23T05:39:40Z
card:

---
license: apache-2.0
language:
- en
---

# AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability

[[Project Page](https://aligngpt-vl.github.io/)] [[Paper](https://arxiv.org/abs/2405.14129)] [[Demo](http://47.116.173.89:7870/)] [[Model](https://huggingface.co/nlpzhaof)]

Authors: [Fei Zhao*](https://scholar.google.com/citations?user=V01xzWQAAAAJ&hl=zh-CN), Taotian Pang*, Chunhui Li, [Zhen Wu](https://scholar.google.com/citations?user=IoGlgtoAAAAJ&hl=zh-CN), Junjie Guo, Shangyu Xing, [Xinyu Dai](https://scholar.google.com/citations?user=zpWB1CgAAAAJ&hl=zh-CN)

## News and Updates

- [5/24] 🔥 We released **AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability**. Check out the [paper](https://arxiv.org/abs/2405.14129) and [demo](http://47.116.173.89:7870/).

## Model Zoo

| Model | LLM | Vision Backbone | Pre-training | Instruct-tuning |
|----------|----------|-----------|---|---|
| AlignGPT-7B | [Vicuna 7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) | [aligngpt-7b-pretrain](https://huggingface.co/nlpzhaof/aligngpt-7b-pretrain/tree/main) | [aligngpt-7b](https://huggingface.co/nlpzhaof/aligngpt-7b/tree/main) |
| AlignGPT-13B | [Vicuna 13B](https://huggingface.co/lmsys/vicuna-13b-v1.5) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) | [aligngpt-13b-pretrain](https://huggingface.co/nlpzhaof/aligngpt-13b-pretrain/tree/main) | [aligngpt-13b](https://huggingface.co/nlpzhaof/aligngpt-13b/tree/main) |
| AlignGPT-LLaMA2 | [LLaMA-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) | To be released | To be released |
| AlignGPT-LLaMA3 | [LLaMA-3-8B-Base](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) | To be released | To be released |

## Performance

| Model | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet |
|----------|---|---|---|---|---|---|---|---|---|---|---|---|
| AlignGPT-7B | 79.1 | 62.9 | 54.2 | 68.5 | 58.4 | 86.0 | 1527.4 | 67.3 | 59.9 | 66.5 | 68.4 | 30.8 |
| AlignGPT-13B | 80.0 | 63.6 | 56.4 | 70.3 | 60.2 | 86.2 | 1572.0 | 69.5 | 63.7 | 67.8 | 75.2 | 35.6 |

## Citation

If you find AlignGPT useful for your research and applications, please cite using this BibTeX:

```
@misc{zhao2024aligngpt,
      title={AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability},
      author={Fei Zhao and Taotian Pang and Chunhui Li and Zhen Wu and Junjie Guo and Shangyu Xing and Xinyu Dai},
      year={2024},
      eprint={2405.14129},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)

The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.
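The per-benchmark gap between the two released checkpoints can be read off the Performance table above; as a quick sanity check, the deltas can also be computed programmatically (scores copied verbatim from the card):

```python
# Benchmark scores taken from the AlignGPT Performance table above.
BENCHMARKS = ["VQAv2", "GQA", "VizWiz", "SQA", "T-VQA", "POPE", "MME",
              "MM-Bench", "MM-Bench-CN", "SEED", "LLaVA-Bench-Wild", "MM-Vet"]
ALIGNGPT_7B  = [79.1, 62.9, 54.2, 68.5, 58.4, 86.0, 1527.4, 67.3, 59.9, 66.5, 68.4, 30.8]
ALIGNGPT_13B = [80.0, 63.6, 56.4, 70.3, 60.2, 86.2, 1572.0, 69.5, 63.7, 67.8, 75.2, 35.6]

def deltas(a, b):
    """Per-benchmark improvement of b over a, rounded to one decimal place."""
    return {name: round(y - x, 1) for name, x, y in zip(BENCHMARKS, a, b)}

if __name__ == "__main__":
    for name, d in deltas(ALIGNGPT_7B, ALIGNGPT_13B).items():
        print(f"{name}: +{d}")
```

The 13B model improves on the 7B model on every benchmark listed, e.g. +0.9 on VQAv2 and +4.8 on MM-Vet.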
modelId: netcat420/MFANN3bv0.14
author: netcat420
last_modified: 2024-06-29T03:27:13Z
downloads: 5
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-06-29T03:19:02Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelId: Koleshjr/mistral_7b_v2_q4_k_m_10_epochs
author: Koleshjr
last_modified: 2024-06-29T03:13:59Z
downloads: 12
likes: 0
library_name: transformers
tags: ["transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"]
pipeline_tag: null
createdAt: 2024-06-29T03:03:49Z
card:

---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---

# Uploaded model

- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
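This upload is a GGUF file in the `q4_k_m` quantization format, i.e. weights stored as roughly 4-bit codes plus per-block scaling parameters. The toy sketch below illustrates the general idea of block-wise 4-bit quantization; it is NOT llama.cpp's actual k-quant algorithm, just a minimal illustration of why a 4-bit quant is ~8x smaller than float32 with bounded per-weight error:

```python
# Toy block-wise 4-bit quantization (illustrative only; not llama.cpp's k-quants).
def quantize_block(values, levels=16):
    """Map a block of floats onto `levels` evenly spaced integer codes.

    Returns the codes and the (scale, offset) needed to reconstruct them.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (levels - 1) or 1.0  # avoid div-by-zero on constant blocks
    codes = [round((v - lo) / scale) for v in values]
    return codes, (scale, lo)

def dequantize_block(codes, params):
    """Reconstruct approximate floats from codes and (scale, offset)."""
    scale, lo = params
    return [c * scale + lo for c in codes]

if __name__ == "__main__":
    block = [0.0, 0.1, -0.25, 0.5, 0.33, -0.4, 0.05, 0.2]
    codes, params = quantize_block(block)
    restored = dequantize_block(codes, params)
    max_err = max(abs(a - b) for a, b in zip(block, restored))
    # Each code fits in 4 bits; rounding error is at most half the step size.
    print("codes:", codes, "max error:", max_err)
```

Real GGUF quant types (`q4_k_m`, `q8_0`, ...) differ in block size, how scales are stored, and which tensors get higher precision, but the size/accuracy trade-off works on this principle.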
modelId: chainup244/google-gemma-2b-1719630584
author: chainup244
last_modified: 2024-06-29T03:12:10Z
downloads: 116
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-06-29T03:09:46Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelId: chainup244/Qwen-Qwen1.5-0.5B-1719629973
author: chainup244
last_modified: 2024-06-29T03:00:05Z
downloads: 116
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-06-29T02:59:34Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelId: John6666/cocoa-mix-xl-v3-sdxl
author: John6666
last_modified: 2024-06-29T02:48:12Z
downloads: 41
likes: 0
library_name: diffusers
tags: ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"]
pipeline_tag: text-to-image
createdAt: 2024-06-29T02:43:37Z
card:

---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
---

Original model is [here](https://civitai.com/models/530602/cocoamixxl?modelVersionId=605696).
modelId: youssefabdelmottaleb/Garbage-Classification-SWIN-Transformer
author: youssefabdelmottaleb
last_modified: 2024-06-29T02:48:01Z
downloads: 212
likes: 0
library_name: transformers
tags: ["transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"]
pipeline_tag: image-classification
createdAt: 2024-06-28T23:30:52Z
card:

---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Garbage-Classification-SWIN-Transformer
  results: []
---

# Garbage-Classification-SWIN-Transformer

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0440
- Accuracy: 0.9900

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1969 | 0.9973 | 280 | 0.1740 | 0.9409 |
| 0.1014 | 1.9982 | 561 | 0.0752 | 0.9755 |
| 0.0333 | 2.9991 | 842 | 0.0551 | 0.9824 |
| 0.0332 | 4.0 | 1123 | 0.0526 | 0.9845 |
| 0.0218 | 4.9973 | 1403 | 0.0511 | 0.9866 |
| 0.0086 | 5.9982 | 1684 | 0.0515 | 0.9873 |
| 0.0057 | 6.9991 | 1965 | 0.0462 | 0.9875 |
| 0.0043 | 8.0 | 2246 | 0.0453 | 0.9891 |
| 0.0012 | 8.9973 | 2526 | 0.0460 | 0.9888 |
| 0.0017 | 9.9733 | 2800 | 0.0440 | 0.9900 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
modelId: Koleshjr/mistral_7b_v2_8bit_q8_0_10_epochs
author: Koleshjr
last_modified: 2024-06-29T02:43:36Z
downloads: 6
likes: 0
library_name: transformers
tags: ["transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"]
pipeline_tag: null
createdAt: 2024-06-29T02:39:26Z
card:

---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---

# Uploaded model

- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
modelId: chainup244/Qwen-Qwen1.5-1.8B-1719628883
author: chainup244
last_modified: 2024-06-29T02:43:08Z
downloads: 116
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-06-29T02:41:25Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
modelId: chainup244/Qwen-Qwen1.5-0.5B-1719628741
author: chainup244
last_modified: 2024-06-29T02:39:33Z
downloads: 116
likes: 0
library_name: transformers
tags: ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"]
pipeline_tag: text-generation
createdAt: 2024-06-29T02:39:02Z
card:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
12unique/animals
12unique
2024-06-29T02:35:02Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-29T02:34:55Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: animals results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9821428656578064 --- # animals Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cat ![cat](images/cat.jpg) #### cow ![cow](images/cow.jpg) #### dog ![dog](images/dog.jpg) #### horse ![horse](images/horse.jpg) #### lion ![lion](images/lion.jpg)
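For local inference with this classifier, a minimal sketch using the 🤗 Transformers `pipeline` API; the image path reuses one of the example images above, and the formatting choices are illustrative:

```python
from transformers import pipeline

model_id = "12unique/animals"

# Load this repository's fine-tuned ViT classifier through the high-level API.
classifier = pipeline("image-classification", model=model_id)

# Score one of the example images; the pipeline returns a list of
# {"label": ..., "score": ...} dicts sorted by confidence.
for pred in classifier("images/cat.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```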
Zelyanoth/traduction_fon_french
Zelyanoth
2024-06-29T02:18:18Z
7
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "dataset:generator", "base_model:google/madlad400-3b-mt", "base_model:adapter:google/madlad400-3b-mt", "license:apache-2.0", "region:us" ]
null
2024-06-23T23:44:32Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: google/madlad400-3b-mt datasets: - generator metrics: - bleu model-index: - name: traduction_fon_french results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # traduction_fon_french This model is a fine-tuned version of [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.2051 - Bleu: 4.2618 - Gen Len: 7.1747 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00016 - train_batch_size: 18 - eval_batch_size: 18 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8931 | 1.0 | 4957 | 4.2563 | 4.206 | 7.2216 | | 1.8748 | 2.0 | 9914 | 4.2353 | 4.3796 | 7.1632 | | 1.9018 | 3.0 | 14871 | 4.2051 | 4.2618 | 7.1747 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.19.2 - Tokenizers 0.19.1
kmilesz/suzukii2
kmilesz
2024-06-29T01:57:30Z
4
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-06-29T01:57:28Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: A photo of suzukii output: url: image-0.png - text: A photo of suzukii output: url: image-1.png - text: A photo of suzukii output: url: image-2.png - text: A photo of suzukii output: url: image-3.png - text: A photo of suzukii output: url: image-4.png - text: A photo of suzukii output: url: image-5.png - text: A photo of suzukii output: url: image-6.png - text: A photo of suzukii output: url: image-7.png - text: A photo of suzukii output: url: image-8.png - text: A photo of suzukii output: url: image-9.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - kmilesz/suzukii2 <Gallery /> ## Model description ### These are kmilesz/suzukii2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`suzukii2.safetensors` here 💾](/kmilesz/suzukii2/blob/main/suzukii2.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:suzukii2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`suzukii2_emb.safetensors` here 💾](/kmilesz/suzukii2/blob/main/suzukii2_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `suzukii2_emb` to your prompt. 
For example, `A photo of suzukii2_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kmilesz/suzukii2', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='kmilesz/suzukii2', filename='suzukii2_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/kmilesz/suzukii2/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_5.5bpw_h8
BigHuggyD
2024-06-29T01:56:19Z
4
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "exl2", "region:us" ]
text-generation
2024-06-29T01:26:29Z
--- inference: false license: cc-by-nc-4.0 library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar --- # Model Card for C4AI Command R+ 🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**. ## Model Summary C4AI Command R+ is an open weights research release of a 104-billion-parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. Tool use in this model generation is multi-step, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01). Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus). 
**Usage** Please install `transformers` from the source repository that includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** This 
model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command R+ supports a context length of 128K. ## Evaluations Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparisons with models outside the leaderboard, or against self-reported numbers that can't be replicated in the same way. 
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k | |:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:| | **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 | | [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 | | [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 | | [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 | | [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 | | [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 | | [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 | | [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 | | [LLama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 | We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, or tooling performance, or the evaluation of open-ended generations, at which we believe Command R+ to be state-of-the-art. For evaluations of RAG, multilingual and tooling performance, read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open-ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/). 
### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. 
<details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. 
# User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. 
The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. 
See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. 
You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. 
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [info@for.ai](mailto:info@for.ai). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. 
This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
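The Code Capabilities section above recommends a low temperature or even greedy decoding for code-generation instructions; a minimal sketch of that setup (the prompt is illustrative, and loading the full-precision weights requires substantial memory):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A code-related instruction; the prompt text is illustrative.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)

# do_sample=False disables sampling, i.e. greedy decoding, as recommended
# for code-generation related instructions.
gen_tokens = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(gen_tokens[0]))
```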
billa1972/layoutlmv3-violations-test
billa1972
2024-06-29T01:26:59Z
108
0
transformers
[ "transformers", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:violations", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-29T01:14:58Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer datasets: - violations metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-violations-test results: - task: name: Token Classification type: token-classification dataset: name: violations type: violations config: ViolationsExtraction split: test args: ViolationsExtraction metrics: - name: Precision type: precision value: 0.9482758620689655 - name: Recall type: recall value: 0.9116022099447514 - name: F1 type: f1 value: 0.9295774647887324 - name: Accuracy type: accuracy value: 0.9502762430939227 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-violations-test This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the violations dataset. It achieves the following results on the evaluation set: - Loss: 0.3685 - Precision: 0.9483 - Recall: 0.9116 - F1: 0.9296 - Accuracy: 0.9503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 9.0909 | 100 | 0.2997 | 0.9543 | 0.9227 | 0.9382 | 0.9558 | | No log | 18.1818 | 200 | 0.3729 | 0.9425 | 0.9061 | 0.9239 | 0.9448 | | No log | 27.2727 | 300 | 0.3408 | 0.9543 | 0.9227 | 0.9382 | 0.9558 | | 
No log | 36.3636 | 400 | 0.3566 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.0997 | 45.4545 | 500 | 0.3685 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.0997 | 54.5455 | 600 | 0.3736 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.0997 | 63.6364 | 700 | 0.3866 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.0997 | 72.7273 | 800 | 0.3990 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.0997 | 81.8182 | 900 | 0.4018 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | | 0.001 | 90.9091 | 1000 | 0.3979 | 0.9483 | 0.9116 | 0.9296 | 0.9503 | ### Framework versions - Transformers 4.42.1 - Pytorch 2.3.1+cu118 - Datasets 2.20.0 - Tokenizers 0.19.1
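As a quick consistency check on the reported metrics (this uses only the numbers above, not the model itself), the F1 score is the harmonic mean of the reported precision and recall:

```python
# Verify that the reported F1 follows from the reported precision and
# recall via the standard F1 definition (harmonic mean of the two).
precision = 0.9482758620689655
recall = 0.9116022099447514

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9296, matching the F1 reported above
```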
mpasila/faster-whisper-large-finnish-v3
mpasila
2024-06-29T01:21:39Z
14
2
transformers
[ "transformers", "whisper-event", "finnish", "speech-recognition", "automatic-speech-recognition", "fi", "dataset:mozilla-foundation/common_voice_11_0", "dataset:google/fleurs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-29T00:43:24Z
--- language: - fi license: apache-2.0 tags: - whisper-event - finnish - speech-recognition datasets: - mozilla-foundation/common_voice_11_0 - google/fleurs metrics: - wer - cer model-index: - name: Whisper Large V3 Finnish results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: fi split: test args: fi metrics: - name: Wer type: wer value: 8.23 - name: Cer type: cer value: 1.43 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: FLEURS type: google/fleurs config: fi_fi split: test args: fi_fi metrics: - name: Wer type: wer value: 8.21 - name: Cer type: cer value: 3.23 library_name: transformers pipeline_tag: automatic-speech-recognition --- # This is a conversion of [Finnish-NLP/whisper-large-finnish-v3](https://huggingface.co/Finnish-NLP/whisper-large-finnish-v3) into faster-whisper format. <h3>This is our improved Whisper V3 model, finetuned from OpenAI Whisper Large V3</h3> <p>We improve on our previously finetuned Whisper V2 model (<a href="https://huggingface.co/Finnish-NLP/whisper-large-v2-finnish">Finnish-NLP/whisper-large-v2-finnish</a>) as follows:</p> <p>CV11 (Common Voice 11 test set) WER (word error rate): 10.42 --> 8.23</p> <p>Fleurs (a speech recognition test set by Google) WER (word error rate): 10.20 --> 8.21</p> <p>The model was trained on an Nvidia RTX 4080 for 32k steps with batch size 8 and gradient accumulation 2.</p> <br> <h3>Original OpenAI Whisper Large V3</h3> - CV11 - WER: 14.81 - WER NORMALIZED: 10.82 - CER: 2.7 - CER NORMALIZED: 2.07 - Fleurs - WER: 12.04 - WER NORMALIZED: 9.63 - CER: 2.48 - CER NORMALIZED: 3.64 <h3>After finetuning with Finnish data, our V3 got these scores on the test sets:</h3> - @14000 finetuning steps - CV11 - WER: 11.36 - WER NORMALIZED: 8.31 - CER: 1.93 - CER NORMALIZED: 1.48 - Fleurs - WER: 10.2 - WER NORMALIZED: 8.56 - CER: 2.26 - CER NORMALIZED: 3.54 - @32000 finetuning steps - CV11 - WER:
11.47 - WER NORMALIZED: 8.23 - CER: 1.91 - CER NORMALIZED: 1.43 - Fleurs - WER: 10.1 - WER NORMALIZED: 8.21 - CER: 2.2 - CER NORMALIZED: 3.23
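To put the improvement in perspective (a derived figure, not part of the original evaluation), the WER drops reported above correspond to roughly a 20% relative error reduction over the V2 model:

```python
# Relative WER reduction computed from the word error rates reported above.
def relative_reduction(old_wer: float, new_wer: float) -> float:
    """Relative error-rate reduction in percent."""
    return (old_wer - new_wer) / old_wer * 100

print(round(relative_reduction(10.42, 8.23), 1))  # CV11:   21.0 (% relative)
print(round(relative_reduction(10.20, 8.21), 1))  # Fleurs: 19.5 (% relative)
```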
Xu-Ouyang/pythia-12b-deduped-int4-step107000-GPTQ-wikitext2
Xu-Ouyang
2024-06-29T01:05:54Z
78
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-06-29T01:02:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fathima-Firose/Inspectra-Sum
Fathima-Firose
2024-06-29T01:02:07Z
106
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-29T01:01:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
grocode87/replyability_model
grocode87
2024-06-29T00:56:28Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-29T00:56:23Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # grocode87/replyability_model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('grocode87/replyability_model') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=grocode87/replyability_model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 119 with parameters: ``` {'batch_size': 200, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss` Parameters of the fit()-Method: ``` { "epochs": 30, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 
'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
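Note that the architecture above ends with a `Normalize()` module, so the embeddings are unit-length and cosine similarity reduces to a plain dot product. A small sketch with random stand-in vectors (not real model outputs) illustrates this:

```python
import numpy as np

# With L2-normalized vectors the cosine-similarity denominator is 1,
# so cosine similarity equals the dot product.
rng = np.random.default_rng(0)
a = rng.normal(size=384)
b = rng.normal(size=384)
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(cosine, a @ b)
```

This is why dot-product (inner-product) search indexes can be used directly with this model's embeddings.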
Sharan1712/llama2_7B_oasst_qlora_4bit_1e
Sharan1712
2024-06-29T00:51:27Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-29T00:49:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/mala-anime-mix-nsfw-pony-xl-v5-sdxl
John6666
2024-06-29T00:47:45Z
19,258
13
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-29T00:41:09Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/442163/mala-anime-mix-nsfw-ponyxl?modelVersionId=604755).
John6666/3x3x3mixxl-v2-sdxl
John6666
2024-06-29T00:35:13Z
14,515
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-29T00:22:29Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/464044?modelVersionId=605542).
opencsg/csg-wukong-code-1B-cpt
opencsg
2024-06-29T00:00:33Z
124
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T17:54:24Z
--- language: - en pipeline_tag: text-generation tags: - code license: apache-2.0 --- # **csg-wukong-code-1B-cpt** [[中文]](#chinese) [[English]](#english) <a id="english"></a> <p align="center"> <img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg"> </p> <p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p> </div> OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models. The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively. ## Model Description **csg-wukong-code-1B-cpt** is a 1 billion-parameter small language model (SLM) continually pretrained from [csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B). <br> More information about csg-wukong-code-1B-cpt will be provided later.
# Training ## Hardware - **GPUs:** 16 H800 - **Training time:** 5 days ## Software - **Orchestration:** [Deepspeed](https://github.com/OpenCSGs) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex) <a id="chinese"></a> <p> </p> # OpenCSG介绍 <p align="center"> <img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg"> </p> <p align="center"><a href="https://opencsg.com/models">[OpenCSG 社区]</a> <a href="https://github.com/OpenCSGs/Awesome-SLMs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[微信]</a> <a href="https://twitter.com/OpenCsg">[推特]</a> </p> </div> OpenCSG中 Open是开源开放;C 代表 Converged resources,整合和充分利用的混合异构资源优势,算力降本增效;S 代表 Software refined,重新定义软件的交付方式,通过大模型驱动软件开发,人力降本增效;G 代表 Generative LM,大众化、普惠化和民主化的可商用的开源生成式大模型。 OpenCSG的愿景是让每个行业、每个公司、每个人都拥有自己的模型。 我们坚持开源开放的原则,将OpenCSG的大模型软件栈开源到社区,欢迎使用、反馈和参与共建,欢迎关注。 ## 模型介绍 **csg-wukong-code-1B-cpt** 是一个1B参数量的小语言模型,该模型是基于 [csg-wukong-1B](https://huggingface.co/opencsg/csg-wukong-1B) 二次预训练而成。 <br> 我们将在后面介绍更多关于这个模型的信息。 # 训练 ## 硬件资源 - **GPU数量:** 16 H800 - **训练时间:** 5天 ## 软件使用 - **微调训练框架:** [Deepspeed](https://github.com/OpenCSGs) - **深度学习框架:** [PyTorch](https://github.com/pytorch/pytorch) - **BF16:** [apex](https://github.com/NVIDIA/apex)
RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf
RichardErkhov
2024-06-28T23:54:11Z
11
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-28T18:52:15Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) T3Q-LLM1-Solar-10.8B-v1.0 - GGUF - Model creator: https://huggingface.co/T3Q-LLM-Product/ - Original model: https://huggingface.co/T3Q-LLM-Product/T3Q-LLM1-Solar-10.8B-v1.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [T3Q-LLM1-Solar-10.8B-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q2_K.gguf) | Q2_K | 3.77GB | | [T3Q-LLM1-Solar-10.8B-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.IQ3_XS.gguf) | IQ3_XS | 4.18GB | | [T3Q-LLM1-Solar-10.8B-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.IQ3_S.gguf) | IQ3_S | 4.41GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_S.gguf) | Q3_K_S | 4.39GB | | [T3Q-LLM1-Solar-10.8B-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.IQ3_M.gguf) | IQ3_M | 4.56GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q3_K.gguf) | Q3_K | 4.88GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.88GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q3_K_L.gguf) | Q3_K_L | 5.31GB | | 
[T3Q-LLM1-Solar-10.8B-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.IQ4_XS.gguf) | IQ4_XS | 5.47GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q4_0.gguf) | Q4_0 | 5.7GB | | [T3Q-LLM1-Solar-10.8B-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.IQ4_NL.gguf) | IQ4_NL | 5.77GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q4_K_S.gguf) | Q4_K_S | 5.75GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q4_K.gguf) | Q4_K | 6.07GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q4_K_M.gguf) | Q4_K_M | 6.07GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q4_1.gguf) | Q4_1 | 6.32GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q5_0.gguf) | Q5_0 | 6.94GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q5_K_S.gguf) | Q5_K_S | 6.94GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q5_K.gguf) | Q5_K | 7.13GB | | 
[T3Q-LLM1-Solar-10.8B-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q5_K_M.gguf) | Q5_K_M | 7.13GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q5_1.gguf) | Q5_1 | 7.56GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q6_K.gguf) | Q6_K | 8.26GB | | [T3Q-LLM1-Solar-10.8B-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/T3Q-LLM-Product_-_T3Q-LLM1-Solar-10.8B-v1.0-gguf/blob/main/T3Q-LLM1-Solar-10.8B-v1.0.Q8_0.gguf) | Q8_0 | 10.69GB | Original model description: --- library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6653cca1f72c9a37ceeef9bc/dRSvx-qGEF8lsR6srB2lM.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6653cca1f72c9a37ceeef9bc/uWGfdUrktRbGOfTyYPGQe.png)
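As a rough rule of thumb (an estimate, not a figure from the original model card), dividing each file size in the table above by the ~10.8B parameter count gives the effective bits per weight of each quantization level; GB-vs-GiB rounding and non-quantized tensors such as embeddings blur the exact values:

```python
# Approximate effective bits per weight for a few quant levels above,
# assuming ~10.8e9 parameters and the listed sizes in decimal GB.
PARAMS = 10.8e9

def bits_per_weight(size_gb: float) -> float:
    return size_gb * 1e9 * 8 / PARAMS

for name, size_gb in [("Q2_K", 3.77), ("Q4_K_M", 6.07), ("Q8_0", 10.69)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
```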
tgrhn/whisper-large-v2-tr-cv17-2
tgrhn
2024-06-28T23:36:57Z
84
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "tr", "dataset:mozilla-foundation/common_voice_17", "base_model:openai/whisper-large-v2", "base_model:finetune:openai/whisper-large-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-28T19:58:08Z
--- language: - tr license: apache-2.0 base_model: openai/whisper-large-v2 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_17 model-index: - name: 'Whisper Large v2 TR ' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large v2 TR This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 17 dataset. It achieves the following results on the evaluation set: - Loss: 0.1520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 363 | 0.1495 | | 0.3301 | 2.0 | 726 | 0.1448 | | 0.0633 | 3.0 | 1089 | 0.1520 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
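The warmup length implied by the hyperparameters above can be worked out from the step counts in the training-results table, assuming the usual 🤗 Trainer behavior of rounding the warmup fraction up:

```python
import math

# Sketch: how lr_scheduler_warmup_ratio translates into warmup steps for
# this run. Step counts are taken from the training-results table above.
steps_per_epoch = 363
num_epochs = 3
warmup_ratio = 0.1

total_steps = steps_per_epoch * num_epochs            # 1089, as in the table
warmup_steps = math.ceil(total_steps * warmup_ratio)  # Trainer rounds up
print(total_steps, warmup_steps)
```

So roughly the first tenth of training (about 109 steps) ramps the learning rate up before the linear decay begins.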
Xu-Ouyang/pythia-12b-deduped-int4-step71000-GPTQ-wikitext2
Xu-Ouyang
2024-06-28T23:33:58Z
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-06-28T23:30:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bhavsh/tinyllama-new-bhavesh
bhavsh
2024-06-28T23:19:25Z
116
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T23:17:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ZeroWw/gemma-2-9b-it-GGUF
ZeroWw
2024-06-28T23:16:08Z
39
1
null
[ "gguf", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-28T22:57:42Z
--- license: mit language: - en --- My own (ZeroWw) quantizations. Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k. Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as pure f16.
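As a back-of-the-envelope check on why this mixed layout can still come in under a uniform q8_0 file, here is a sketch; the bits-per-weight figures and the 5% embed/output fraction are my assumptions (approximate llama.cpp values), not measurements:

```python
# Rough size estimate (assumed approximate bits-per-weight figures) showing
# why f16 embed/output tensors plus a q6_k body can undercut uniform q8_0.
BPW = {"f16": 16.0, "q8_0": 8.5, "q6_k": 6.56, "q5_k": 5.5}

def est_size_gb(n_params_b, body_bpw, embed_bpw, embed_frac=0.05):
    """Estimate file size in GB for n_params_b billion parameters, with
    embed/output tensors (embed_frac of the weights) stored at embed_bpw."""
    bits = n_params_b * 1e9 * ((1 - embed_frac) * body_bpw
                               + embed_frac * embed_bpw)
    return bits / 8 / 1e9

mixed_q6 = est_size_gb(9, BPW["q6_k"], BPW["f16"])    # f16.q6 layout
uniform_q8 = est_size_gb(9, BPW["q8_0"], BPW["q8_0"])  # plain q8_0
print(round(mixed_q6, 2), round(uniform_q8, 2))
```

Under these assumptions the mixed file is about 2 GB smaller than q8_0 at 9B parameters, while keeping the quality-sensitive embed/output tensors at full f16.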
liminerity/Bitnet-Mistral.0.2-v6.9
liminerity
2024-06-28T23:04:10Z
156
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "base_model:liminerity/Bitnet-Mistral.0.2-v6.9", "base_model:finetune:liminerity/Bitnet-Mistral.0.2-v6.9", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T01:17:15Z
--- base_model: liminerity/Bitnet-Mistral.0.2-v6.9 tags: - generated_from_trainer model-index: - name: Bitnet-Mistral.0.2-v6.9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bitnet-Mistral.0.2-v6.9 This model is a fine-tuned version of [liminerity/Bitnet-Mistral.0.2-v6.9](https://huggingface.co/liminerity/Bitnet-Mistral.0.2-v6.9) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
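The "total_train_batch_size" in the hyperparameters above is not a separate setting; it is the effective batch size produced by gradient accumulation:

```python
# Sketch: the effective (total) batch size is the per-device batch size
# multiplied by the gradient-accumulation steps, matching the card above.
train_batch_size = 8
gradient_accumulation_steps = 16
effective_batch = train_batch_size * gradient_accumulation_steps
print(effective_batch)  # 128, the listed total_train_batch_size
```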
DewEfresh/neo_7b-slerp
DewEfresh
2024-06-28T23:03:27Z
82
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "m-a-p/neo_7b", "conversational", "base_model:m-a-p/neo_7b", "base_model:finetune:m-a-p/neo_7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T23:00:23Z
--- base_model: - m-a-p/neo_7b - m-a-p/neo_7b tags: - merge - mergekit - lazymergekit - m-a-p/neo_7b --- # neo_7b-slerp neo_7b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b) * [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b) ## 🧩 Configuration ```yaml slices: - sources: - model: m-a-p/neo_7b layer_range: [0, 1] - model: m-a-p/neo_7b layer_range: [1, 2] - sources: - model: m-a-p/neo_7b layer_range: [2, 3] - model: m-a-p/neo_7b layer_range: [3, 4] - sources: - model: m-a-p/neo_7b layer_range: [4, 5] - model: m-a-p/neo_7b layer_range: [5,6] - sources: - model: m-a-p/neo_7b layer_range: [6, 7] - model: m-a-p/neo_7b layer_range: [7, 8] - sources: - model: m-a-p/neo_7b layer_range: [8, 9] - model: m-a-p/neo_7b layer_range: [9, 10] - sources: - model: m-a-p/neo_7b layer_range: [10, 11] - model: m-a-p/neo_7b layer_range: [11, 12] - sources: - model: m-a-p/neo_7b layer_range: [12, 13] - model: m-a-p/neo_7b layer_range: [13, 14] - sources: - model: m-a-p/neo_7b layer_range: [14, 15] - model: m-a-p/neo_7b layer_range: [15, 16] - sources: - model: m-a-p/neo_7b layer_range: [16, 17] - model: m-a-p/neo_7b layer_range: [17, 18] - sources: - model: m-a-p/neo_7b layer_range: [18, 19] - model: m-a-p/neo_7b layer_range: [19, 20] - sources: - model: m-a-p/neo_7b layer_range: [20, 21] - model: m-a-p/neo_7b layer_range: [21, 22] - sources: - model: m-a-p/neo_7b layer_range: [22, 23] - model: m-a-p/neo_7b layer_range: [23, 24] - sources: - model: m-a-p/neo_7b layer_range: [24, 25] - model: m-a-p/neo_7b layer_range: [25, 26] - sources: - model: m-a-p/neo_7b layer_range: [26, 27] - model: m-a-p/neo_7b layer_range: [27, 28] merge_method: slerp base_model: m-a-p/neo_7b parameters: t: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import 
torch model = "DewEfresh/neo_7b-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
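A note on the slerp config above, based on my reading of mergekit's slice semantics (consult the mergekit documentation for the authoritative behavior): each `sources` slice pairs one layer of the first neo_7b copy with the *following* layer of the second copy, so the 28 base layers collapse into one merged layer per slice:

```python
# Hypothetical reconstruction of the slice pattern in the YAML above:
# ([0,1],[1,2]), ([2,3],[3,4]), ..., ([26,27],[27,28]).
pairs = [([i, i + 1], [i + 1, i + 2]) for i in range(0, 28, 2)]
merged_layers = len(pairs)  # one output layer per `sources` slice
print(merged_layers)
```

If that reading is right, the merge interpolates adjacent layers (t=0.5) rather than producing a deeper model.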
jlancaster36/code_bagel_llama-3-8b-v1.1
jlancaster36
2024-06-28T22:38:45Z
8,410
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:mattshumer/Llama-3-8B-16K", "base_model:finetune:mattshumer/Llama-3-8B-16K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T22:34:30Z
--- base_model: mattshumer/Llama-3-8B-16K language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** jlancaster36 - **License:** apache-2.0 - **Finetuned from model :** mattshumer/Llama-3-8B-16K This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_6.0bpw_h8
BigHuggyD
2024-06-28T22:19:27Z
4
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "6-bit", "exl2", "region:us" ]
text-generation
2024-06-28T21:41:25Z
--- inference: false license: cc-by-nc-4.0 library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar --- # Model Card for C4AI Command R+ 🚨 **This model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit)**. ## Model Summary C4AI Command R+ is an open weights research release of a 104 billion parameter model with highly advanced capabilities, including Retrieval Augmented Generation (RAG) and tool use to automate sophisticated tasks. The tool use in this model generation enables multi-step tool use, which allows the model to combine multiple tools over multiple steps to accomplish difficult tasks. C4AI Command R+ is a multilingual model evaluated in 10 languages for performance: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Arabic, and Simplified Chinese. Command R+ is optimized for a variety of use cases including reasoning, summarization, and question answering. C4AI Command R+ is part of a family of open weight releases from Cohere For AI and Cohere. Our smaller companion model is [C4AI Command R](https://huggingface.co/CohereForAI/c4ai-command-r-v01). Developed by: [Cohere](https://cohere.com/) and [Cohere For AI](https://cohere.for.ai) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), which also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: c4ai-command-r-plus - Model Size: 104 billion parameters - Context length: 128K **Try C4AI Command R+** You can try out C4AI Command R+ before downloading the weights in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
**Usage** Please install `transformers` from the source repository that includes the necessary changes for this model. ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 8-bit precision** ```python # pip install 'git+https://github.com/huggingface/transformers.git' bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig bnb_config = BitsAndBytesConfig(load_in_8bit=True) model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Hello, how are you?"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` **Quantized model through bitsandbytes, 4-bit precision** This 
model is the non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-4bit). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. **Languages covered**: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. **Context length**: Command R+ supports a context length of 128K. ## Evaluations Command R+ has been submitted to the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We include the results below, along with a direct comparison to the strongest state-of-the-art open weights models currently available on Hugging Face. We note that these results are only useful to compare when evaluations are implemented for all models in a [standardized way](https://github.com/EleutherAI/lm-evaluation-harness) using publicly available code, and hence shouldn't be used for comparison outside of models submitted to the leaderboard or compared to self-reported numbers which can't be replicated in the same way.
| Model | Average | Arc (Challenge) | Hella Swag | MMLU | Truthful QA | Winogrande | GSM8k |
|:--------------------------------|----------:|------------------:|-------------:|-------:|--------------:|-------------:|--------:|
| **CohereForAI/c4ai-command-r-plus** | 74.6 | 70.99 | 88.6 | 75.7 | 56.3 | 85.4 | 70.7 |
| [DBRX Instruct](https://huggingface.co/databricks/dbrx-instruct) | 74.5 | 68.9 | 89 | 73.7 | 66.9 | 81.8 | 66.9 |
| [Mixtral 8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.7 | 70.1 | 87.6 | 71.4 | 65 | 81.1 | 61.1 |
| [Mixtral 8x7B Chat](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 72.6 | 70.2 | 87.6 | 71.2 | 64.6 | 81.4 | 60.7 |
| [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) | 68.5 | 65.5 | 87 | 68.2 | 52.3 | 81.5 | 56.6 |
| [Llama 2 70B](https://huggingface.co/meta-llama/Llama-2-70b-hf) | 67.9 | 67.3 | 87.3 | 69.8 | 44.9 | 83.7 | 54.1 |
| [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 65.3 | 65.4 | 84.2 | 74.9 | 55.4 | 80.1 | 31.9 |
| [Gemma-7B](https://huggingface.co/google/gemma-7b) | 63.8 | 61.1 | 82.2 | 64.6 | 44.8 | 79 | 50.9 |
| [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | 62.4 | 64.6 | 85.9 | 63.9 | 52.8 | 80.5 | 26.7 |
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 61 | 60 | 83.3 | 64.2 | 42.2 | 78.4 | 37.8 |

We include these metrics here because they are frequently requested, but note that these metrics do not capture RAG, multilingual, or tooling performance, or the evaluation of open ended generations, which we believe Command R+ to be state-of-the-art at. For evaluations of RAG, multilingual and tooling, read more [here](https://txt.cohere.com/command-r-plus-microsoft-azure/). For evaluation of open ended generation, Command R+ is currently being evaluated on the [chatbot arena](https://chat.lmsys.org/).
### Tool use & multihop capabilities: Command R+ has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation. Command R+’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R+ may use one of its supplied tools more than once. The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required. Comprehensive documentation for working with command R+'s tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. 
<details> <summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary> ```python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # Define tools available for the model to use: tools = [ { "name": "internet_search", "description": "Returns a list of relevant document snippets for a textual query retrieved from the internet", "parameter_definitions": { "query": { "description": "Query to search the internet with", "type": 'str', "required": True } } }, { 'name': "directly_answer", "description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history", 'parameter_definitions': {} } ] # render the tool use prompt as a string: tool_use_prompt = tokenizer.apply_tool_use_template( conversation, tools=tools, tokenize=False, add_generation_prompt=True, ) print(tool_use_prompt) ``` </details> <details> <summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary> ```` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. 
# User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling. ## Available Tools Here is a list of tools that you have available to you: ```python def internet_search(query: str) -> List[Dict]: """Returns a list of relevant document snippets for a textual query retrieved from the internet Args: query (str): Query to search the internet with """ pass ``` ```python def directly_answer() -> List[Dict]: """Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history """ pass ```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. 
The list of actions you want to call should be formatted as a list of json objects, for example: ```json [ { "tool_name": title of the tool in the specification, "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters } ]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary> ```` Action: ```json [ { "tool_name": "internet_search", "parameters": { "query": "biggest penguin in the world" } } ] ``` ```` </details> ### Grounded Generation and RAG Capabilities: Command R+ has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation. Command R+’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured. By default, Command R+ will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will then insert grounding spans into the answer. 
See below for an example. This is referred to as `accurate` grounded generation. The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens. Comprehensive documentation for working with Command R+'s grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r). The code snippet below shows a minimal working example on how to render a prompt. <details> <summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary> ````python from transformers import AutoTokenizer model_id = "CohereForAI/c4ai-command-r-plus" tokenizer = AutoTokenizer.from_pretrained(model_id) # define conversation input: conversation = [ {"role": "user", "content": "Whats the biggest penguin in the world?"} ] # define documents to ground on: documents = [ { "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." }, { "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."} ] # render the tool use prompt as a string: grounded_generation_prompt = tokenizer.apply_grounded_generation_template( conversation, documents=documents, citation_mode="accurate", # or "fast" tokenize=False, add_generation_prompt=True, ) print(grounded_generation_prompt) ```` </details> <details> <summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary> ````<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral. # System Preamble ## Basic Rules You are a powerful conversational AI trained by Cohere to help people. 
You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions. # User Preamble ## Task and Context You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging. ## Style Guide Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results> Document: 0 title: Tall penguins text: Emperor penguins are the tallest growing up to 122 cm in height. Document: 1 title: Penguin habitats text: Emperor penguins only live in Antarctica. </results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line. Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'. Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'. 
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup. Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ```` </details> <details> <summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary> ```` Relevant Documents: 0,1 Cited Documents: 0,1 Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres. Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0> ```` </details> ### Code Capabilities: Command R+ has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions. ### Model Card Contact For errors or additional questions about details in this model card, contact [info@for.ai](mailto:info@for.ai). ### Terms of Use: We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. 
This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy). ### Try Chat: You can try Command R+ chat in the playground [here](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command-r-plus).
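The grounded generation completion shown earlier embeds its citations as `<co: N>…</co: N>` spans, which are straightforward to post-process. Below is a minimal sketch of such a post-processor; the `extract_citations` helper is hypothetical (not part of Cohere's tooling or the `transformers` API) and simply matches the tag format shown in the example completion:

```python
import re

# Hypothetical helper (not part of Cohere's tooling): pull (document_id, fact)
# pairs out of a grounded answer containing <co: N>...</co: N> spans.
def extract_citations(grounded_answer):
    # The backreference \1 ensures the closing tag's document id matches the opening tag's.
    pattern = re.compile(r"<co: (\d+)>(.*?)</co: \1>", re.DOTALL)
    return [(int(doc_id), fact) for doc_id, fact in pattern.findall(grounded_answer)]

answer = (
    "The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or "
    "biggest penguin in the world. It is a bird that "
    "<co: 1>lives only in Antarctica</co: 1>."
)
print(extract_citations(answer))
# [(0, 'Emperor Penguin'), (0, 'tallest'), (1, 'lives only in Antarctica')]
```

Each tuple pairs a document index from the `<results>` block with the fact it grounds, which is enough to render clickable citations in a UI.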
jeiku/qwen2-1
jeiku
2024-06-28T22:18:55Z
4
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "merge", "mergekit", "lazymergekit", "Weyaxi/Einstein-v7-Qwen2-7B", "jeiku/dontusethis", "conversational", "base_model:Weyaxi/Einstein-v7-Qwen2-7B", "base_model:merge:Weyaxi/Einstein-v7-Qwen2-7B", "base_model:jeiku/dontusethis", "base_model:merge:jeiku/dontusethis", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T22:14:06Z
--- base_model: - Weyaxi/Einstein-v7-Qwen2-7B - jeiku/dontusethis tags: - merge - mergekit - lazymergekit - Weyaxi/Einstein-v7-Qwen2-7B - jeiku/dontusethis --- # qwen2-1 qwen2-1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Weyaxi/Einstein-v7-Qwen2-7B](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B) * [jeiku/dontusethis](https://huggingface.co/jeiku/dontusethis) ## 🧩 Configuration ```yaml slices: - sources: - model: Weyaxi/Einstein-v7-Qwen2-7B layer_range: [0,28] - model: jeiku/dontusethis layer_range: [0,28] merge_method: slerp base_model: Weyaxi/Einstein-v7-Qwen2-7B parameters: t: - filter: self_attn value: [0, 0.3, 0.5, 0.7, 1] - filter: mlp value: [1, 0.7, 0.5, 0.3, 0] - value: 0.33 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "jeiku/qwen2-1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
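In the slerp configuration above, the `t` values for `self_attn` and `mlp` are anchor lists rather than a single scalar. The sketch below illustrates one plausible reading of how such a list is spread across the 28 merged layers — anchors placed evenly over layer depth, with each layer's `t` linearly interpolated between its two nearest anchors. This is an assumption about mergekit's gradient handling, not mergekit's actual code:

```python
# Sketch (assumed behavior, not actual mergekit code): spread an anchor list
# evenly over layer depth and linearly interpolate a per-layer t value.
def layer_t(anchors, layer, num_layers):
    pos = layer / (num_layers - 1) * (len(anchors) - 1)  # position in anchor space
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    frac = pos - lo
    return anchors[lo] * (1 - frac) + anchors[hi] * frac

self_attn_anchors = [0, 0.3, 0.5, 0.7, 1]  # the self_attn filter values above
print(layer_t(self_attn_anchors, 0, 28))   # 0.0 at the first layer
print(layer_t(self_attn_anchors, 27, 28))  # 1.0 at the last layer
```

Under this reading, the attention blocks interpolate from one endpoint model toward the other as depth increases, while the reversed `mlp` list does the opposite, and the plain `value: 0.33` applies to all remaining tensors.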
martimfasantos/tinyllama-1.1b-sum-dpo-full_LR5e-8_BS64_3epochs_old
martimfasantos
2024-06-28T22:06:02Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:openai/summarize_from_feedback", "base_model:martimfasantos/tinyllama-1.1b-sum-sft-full_old", "base_model:finetune:martimfasantos/tinyllama-1.1b-sum-sft-full_old", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-27T22:43:39Z
--- license: apache-2.0 base_model: martimfasantos/tinyllama-1.1b-sum-sft-full_old tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - openai/summarize_from_feedback model-index: - name: tinyllama-1.1b-sum-dpo-full_LR5e-8_BS64_3epochs_old results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinyllama-1.1b-sum-dpo-full_LR5e-8_BS64_3epochs_old This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-sum-sft-full_old](https://huggingface.co/martimfasantos/tinyllama-1.1b-sum-sft-full_old) on the openai/summarize_from_feedback dataset. It achieves the following results on the evaluation set: - Loss: 0.6851 - Rewards/chosen: -0.0660 - Rewards/rejected: -0.0839 - Rewards/accuracies: 0.5978 - Rewards/margins: 0.0179 - Logps/rejected: -71.5685 - Logps/chosen: -65.3140 - Logits/rejected: -3.0328 - Logits/chosen: -3.0386 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-08 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | 
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6931 | 0.0689 | 100 | 0.6932 | -0.0000 | 0.0001 | 0.4809 | -0.0001 | -63.1742 | -58.7157 | -3.1575 | -3.1631 |
| 0.6931 | 0.1378 | 200 | 0.6932 | -0.0001 | -0.0000 | 0.4735 | -0.0001 | -63.1804 | -58.7190 | -3.1577 | -3.1633 |
| 0.693 | 0.2068 | 300 | 0.6931 | 0.0002 | 0.0002 | 0.5044 | 0.0000 | -63.1651 | -58.6934 | -3.1573 | -3.1630 |
| 0.6929 | 0.2757 | 400 | 0.6931 | 0.0004 | 0.0004 | 0.4928 | 0.0000 | -63.1405 | -58.6678 | -3.1565 | -3.1621 |
| 0.6925 | 0.3446 | 500 | 0.6930 | 0.0009 | 0.0005 | 0.5374 | 0.0004 | -63.1296 | -58.6253 | -3.1548 | -3.1605 |
| 0.6919 | 0.4135 | 600 | 0.6928 | 0.0012 | 0.0006 | 0.5644 | 0.0006 | -63.1213 | -58.5903 | -3.1529 | -3.1585 |
| 0.6917 | 0.4824 | 700 | 0.6926 | 0.0017 | 0.0006 | 0.5562 | 0.0011 | -63.1193 | -58.5436 | -3.1505 | -3.1562 |
| 0.6905 | 0.5513 | 800 | 0.6924 | 0.0019 | 0.0003 | 0.5681 | 0.0016 | -63.1495 | -58.5180 | -3.1471 | -3.1528 |
| 0.6898 | 0.6203 | 900 | 0.6920 | 0.0018 | -0.0004 | 0.5839 | 0.0023 | -63.2244 | -58.5291 | -3.1427 | -3.1484 |
| 0.6894 | 0.6892 | 1000 | 0.6918 | 0.0013 | -0.0015 | 0.5699 | 0.0028 | -63.3282 | -58.5803 | -3.1380 | -3.1437 |
| 0.6894 | 0.7581 | 1100 | 0.6915 | 0.0004 | -0.0030 | 0.5718 | 0.0033 | -63.4761 | -58.6734 | -3.1327 | -3.1383 |
| 0.6886 | 0.8270 | 1200 | 0.6912 | -0.0007 | -0.0048 | 0.5704 | 0.0041 | -63.6618 | -58.7859 | -3.1285 | -3.1342 |
| 0.6878 | 0.8959 | 1300 | 0.6907 | -0.0026 | -0.0077 | 0.5802 | 0.0051 | -63.9501 | -58.9768 | -3.1220 | -3.1276 |
| 0.6872 | 0.9649 | 1400 | 0.6904 | -0.0047 | -0.0104 | 0.5869 | 0.0057 | -64.2244 | -59.1855 | -3.1181 | -3.1238 |
| 0.6865 | 1.0338 | 1500 | 0.6902 | -0.0077 | -0.0140 | 0.5869 | 0.0063 | -64.5792 | -59.4787 | -3.1117 | -3.1174 |
| 0.6855 | 1.1027 | 1600 | 0.6898 | -0.0109 | -0.0180 | 0.5839 | 0.0071 | -64.9847 | -59.8052 | -3.1071 | -3.1128 |
| 0.6842 | 1.1716 | 1700 | 0.6895 | -0.0156 | -0.0234 | 0.5827 | 0.0079 | -65.5234 | -60.2681 | -3.1002 | -3.1059 |
| 0.6842 | 1.2405 | 1800 | 0.6890 | -0.0215 | -0.0304 | 0.5876 | 0.0089 | -66.2193 | -60.8594 | -3.0947 | -3.1005 |
| 0.6804 | 1.3094 | 1900 | 0.6888 | -0.0253 | -0.0347 | 0.5911 | 0.0095 | -66.6540 | -61.2379 | -3.0896 | -3.0952 |
| 0.6827 | 1.3784 | 2000 | 0.6883 | -0.0299 | -0.0405 | 0.5971 | 0.0107 | -67.2341 | -61.6997 | -3.0847 | -3.0904 |
| 0.6805 | 1.4473 | 2100 | 0.6879 | -0.0345 | -0.0461 | 0.5980 | 0.0116 | -67.7896 | -62.1622 | -3.0798 | -3.0855 |
| 0.68 | 1.5162 | 2200 | 0.6876 | -0.0374 | -0.0495 | 0.5929 | 0.0121 | -68.1323 | -62.4511 | -3.0751 | -3.0808 |
| 0.6805 | 1.5851 | 2300 | 0.6873 | -0.0420 | -0.0550 | 0.5908 | 0.0130 | -68.6762 | -62.9119 | -3.0705 | -3.0763 |
| 0.6802 | 1.6540 | 2400 | 0.6870 | -0.0440 | -0.0575 | 0.5936 | 0.0135 | -68.9288 | -63.1075 | -3.0657 | -3.0714 |
| 0.6788 | 1.7229 | 2500 | 0.6868 | -0.0465 | -0.0604 | 0.5950 | 0.0140 | -69.2231 | -63.3570 | -3.0616 | -3.0674 |
| 0.6784 | 1.7919 | 2600 | 0.6865 | -0.0493 | -0.0639 | 0.5948 | 0.0146 | -69.5742 | -63.6419 | -3.0568 | -3.0626 |
| 0.6771 | 1.8608 | 2700 | 0.6863 | -0.0524 | -0.0676 | 0.5943 | 0.0152 | -69.9422 | -63.9527 | -3.0530 | -3.0588 |
| 0.676 | 1.9297 | 2800 | 0.6861 | -0.0553 | -0.0710 | 0.5892 | 0.0157 | -70.2780 | -64.2370 | -3.0501 | -3.0558 |
| 0.6793 | 1.9986 | 2900 | 0.6860 | -0.0571 | -0.0731 | 0.5922 | 0.0160 | -70.4908 | -64.4251 | -3.0474 | -3.0532 |
| 0.6755 | 2.0675 | 3000 | 0.6858 | -0.0592 | -0.0755 | 0.5929 | 0.0163 | -70.7265 | -64.6294 | -3.0442 | -3.0500 |
| 0.678 | 2.1365 | 3100 | 0.6856 | -0.0600 | -0.0768 | 0.5941 | 0.0168 | -70.8605 | -64.7164 | -3.0422 | -3.0480 |
| 0.6795 | 2.2054 | 3200 | 0.6855 | -0.0611 | -0.0781 | 0.5941 | 0.0170 | -70.9855 | -64.8209 | -3.0400 | -3.0457 |
| 0.6784 | 2.2743 | 3300 | 0.6854 | -0.0619 | -0.0791 | 0.5969 | 0.0172 | -71.0930 | -64.9018 | -3.0382 | -3.0440 |
| 0.6792 | 2.3432 | 3400 | 0.6853 | -0.0627 | -0.0801 | 0.5946 | 0.0175 | -71.1919 | -64.9777 | -3.0366 | -3.0423 |
| 0.6769 | 2.4121 | 3500 | 0.6853 | -0.0636 | -0.0811 | 0.5953 | 0.0175 | -71.2883 | -65.0695 | -3.0356 | -3.0414 |
| 0.6771 | 2.4810 | 3600 | 0.6852 | -0.0645 | -0.0822 | 0.5978 | 0.0177 | -71.3953 | -65.1583 | -3.0346 | -3.0404 |
| 0.6785 | 2.5500 | 3700 | 0.6851 | -0.0650 | -0.0829 | 0.5997 | 0.0179 | -71.4696 | -65.2152 | -3.0340 | -3.0397 |
| 0.6779 | 2.6189 | 3800 | 0.6851 | -0.0655 | -0.0833 | 0.5962 | 0.0179 | -71.5138 | -65.2594 | -3.0332 | -3.0390 |
| 0.6775 | 2.6878 | 3900 | 0.6851 | -0.0657 | -0.0836 | 0.5974 | 0.0179 | -71.5451 | -65.2842 | -3.0331 | -3.0389 |
| 0.6757 | 2.7567 | 4000 | 0.6851 | -0.0658 | -0.0837 | 0.5985 | 0.0179 | -71.5477 | -65.2925 | -3.0326 | -3.0384 |
| 0.6759 | 2.8256 | 4100 | 0.6850 | -0.0658 | -0.0839 | 0.6022 | 0.0181 | -71.5705 | -65.2951 | -3.0324 | -3.0382 |
| 0.6755 | 2.8946 | 4200 | 0.6852 | -0.0659 | -0.0838 | 0.5990 | 0.0178 | -71.5600 | -65.3068 | -3.0326 | -3.0384 |
| 0.6803 | 2.9635 | 4300 | 0.6852 | -0.0659 | -0.0838 | 0.6006 | 0.0179 | -71.5612 | -65.3069 | -3.0327 | -3.0385 |
### Framework versions - Transformers 4.41.2 - Pytorch 2.1.2 - Datasets 2.20.0 - Tokenizers 0.19.1
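A quick sanity check on the evaluation numbers reported in this card (illustrative arithmetic only): in DPO logs, `Rewards/margins` is simply `Rewards/chosen − Rewards/rejected`, and the DPO loss is roughly the negative log-sigmoid of that margin:

```python
import math

# Final evaluation numbers reported above
rewards_chosen = -0.0660
rewards_rejected = -0.0839

margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 0.0179, matching the reported Rewards/margins

# DPO loss = -log(sigmoid(margin)). The reported eval loss (0.6851) is the
# mean of per-example losses, so it sits close to, but not exactly at, the
# loss of the mean margin computed here.
loss_estimate = -math.log(1.0 / (1.0 + math.exp(-margin)))
print(round(loss_estimate, 4))  # ~0.6842
```

With margins this small, the loss barely moves below the ln(2) ≈ 0.6931 starting point, which is consistent with the very low learning rate (5e-08) used for this run.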
dadashzadeh/mbart-finetuned-fa-pretrained-mmad
dadashzadeh
2024-06-28T21:58:28Z
111
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "generated_from_trainer", "summarization", "fa", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-05-22T18:19:32Z
--- tags: - generated_from_trainer model-index: - name: mbart-finetuned-fa-pretrained-mmad results: [] pipeline_tag: summarization license: mit language: - fa --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-fa-pretrained-mmad This model is a fine-tuned version of [eslamxm/mbart-finetuned-fa](https://huggingface.co/eslamxm/mbart-finetuned-fa) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cpu - Datasets 2.12.0 - Tokenizers 0.13.3
BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-GGUF
BenevolenceMessiah
2024-06-28T21:57:37Z
13
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:openbmb/UltraFeedback", "base_model:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3", "base_model:quantized:UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-06-27T23:32:43Z
--- base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 datasets: - openbmb/UltraFeedback language: - en license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- Greetings friends, Asalamu Alaikum; I am pleased to provide you with GGUF versions of this great model! The original model is: [Llama-3-Instruct-8B-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) by [UCLA-AGI](https://huggingface.co/UCLA-AGI) --- <!-- description start --> ## Description (per [TheBloke](https://huggingface.co/TheBloke)) This repo contains GGUF format model files. These files were quantised using [ggml-org/gguf-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF (per [TheBloke](https://huggingface.co/TheBloke)) GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. 
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> --- # BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-Q8_0-GGUF This model was converted to GGUF format from [`UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. 
### CLI: ```bash llama-cli --hf-repo BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-Q8_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-Q8_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-Q8_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo BenevolenceMessiah/Llama-3-Instruct-8B-SPPO-Iter3-Q8_0-GGUF --hf-file llama-3-instruct-8b-sppo-iter3-q8_0.gguf -c 2048 ```
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_all_v4
ekaterina-blatova-jb
2024-06-28T21:57:24Z
170
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T21:56:00Z
--- {} --- ## Evaluation results Validation loss on the whole input: 0.8525515561923385 Validation loss on completion: 0.9319955081446096
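The two losses reported above differ in what is averaged: "loss on the whole input" averages token-level losses over prompt and completion alike, while "loss on completion" masks the prompt tokens out (in `transformers`, typically by setting their labels to -100). A toy illustration with made-up per-token losses, not this model's actual values:

```python
# Hypothetical per-token negative log-likelihoods for one example
token_losses = [0.2, 0.4, 0.3, 1.1, 1.3, 1.2]
prompt_len = 3  # first three tokens belong to the prompt

# Average over every token (prompt + completion)
loss_whole_input = sum(token_losses) / len(token_losses)

# Average over completion tokens only (prompt positions masked out)
completion_losses = token_losses[prompt_len:]
loss_completion = sum(completion_losses) / len(completion_losses)

print(round(loss_whole_input, 3), round(loss_completion, 3))  # 0.75 1.2
```

A higher completion-only loss, as reported here, just means the completion tokens are on average harder to predict than the (often repetitive) prompt tokens.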
juan071/my-super-model
juan071
2024-06-28T21:43:39Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-28T21:34:37Z
--- base_model: bert-base-cased license: apache-2.0 tags: - generated_from_trainer model-index: - name: my-super-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-super-model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6064 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5353 | 0.5 | 5 | 1.6092 | | 1.6015 | 1.0 | 10 | 1.6064 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.1+cpu - Datasets 2.20.0 - Tokenizers 0.19.1
tricodex/Robobo-Learning-Machines
tricodex
2024-06-28T21:40:23Z
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
2024-06-07T18:55:30Z
--- license: gpl-3.0 --- Framework: https://github.com/ci-group/learning_machines_robobo/tree/master Sim: https://www.coppeliarobotics.com/ Task 0: https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task0_g6.py Task 1: https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task1_ppo_train.py https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task1_robobo_con_env.py https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task1_ppo_eval.py Task 2: https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task2_ppo_train.py https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task2_robobo_v1_env.py Task 2 Rework: https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task2_robobo_env_t3rework.py Task 3: https://huggingface.co/tricodex/Robobo-Learning-Machines/blob/main/learning_machines_robobo/examples/full_project_setup/catkin_ws/src/learning_machines/src/learning_machines/task3_rob_env_irs.py
grammarly/medit-xl
grammarly
2024-06-28T21:39:52Z
0
5
transformers
[ "transformers", "text2text-generation", "en", "de", "es", "ar", "ja", "ko", "zh", "dataset:wi_locness", "dataset:matejklemen/falko_merlin", "dataset:paws", "dataset:paws-x", "dataset:asset", "arxiv:2402.16472", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
text2text-generation
2024-04-15T21:26:24Z
--- license: cc-by-nc-sa-4.0 datasets: - wi_locness - matejklemen/falko_merlin - paws - paws-x - asset language: - en - de - es - ar - ja - ko - zh metrics: - bleu - rouge - sari - accuracy library_name: transformers widget: - text: >- Umschreiben sie den satz: When I grow up, I start to understand what he said is quite right. example_title: GEC (de|en) - text: >- 문장의 간단한 버전 작성: Cuando se pueden mantener tasas de flujo comparables, los resultados son altos. example_title: Simplification (ko|es) - text: 'Paraphrase this: いちごは物語を紹介し、読者をイベントに導くと彼は言った。' example_title: Paraphrase (en|ja) pipeline_tag: text2text-generation --- # Model Card for mEdIT-xl The `medit-xl` model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-7b-lora` model on the mEdIT dataset. **Paper:** mEdIT: Multilingual Text Editing via Instruction Tuning **Authors:** Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar ## Model Details ### Model Description - **Language(s) (NLP)**: Arabic, Chinese, English, German, Japanese, Korean, Spanish - **Finetuned from model:** `MBZUAI/bactrian-x-llama-7b-lora` ### Model Sources - **Repository:** https://github.com/vipulraheja/medit - **Paper:** https://arxiv.org/abs/2402.16472v1 ## How to use Given an edit instruction and an original text, our model can generate the edited version of the text.<br> ![task_specs](https://cdn-uploads.huggingface.co/production/uploads/60985a0547dc3dbf8a976607/816ZY2t0XPCpMMd6Z072K.png) Specifically, our models support both multi-lingual and cross-lingual text revision. Note that the input and output texts are always in the same language. The monolingual vs. cross-lingual setting is determined by comparing the language of the edit instruction in relation to the language of the input text. ### Instruction format Adherence to the following instruction format is essential; failure to do so may result in the model producing less-than-ideal results. 
``` instruction_tokens = [ "Instruction", "Anweisung", ... ] input_tokens = [ "Input", "Aporte", ... ] output_tokens = [ "Output", "Produzione", ... ] task_descriptions = [ "Fix grammatical errors in this sentence", # <-- GEC task "Umschreiben Sie den Satz", # <-- Paraphrasing ... ] ``` **The entire list of possible instructions, input/output tokens, and task descriptions can be found in the Appendix of our paper.** ``` prompt_template = """### <instruction_token>:\n<task_description>\n### <input_token>:\n<input>\n### <output_token>:\n\n""" ``` Note that the tokens and the task description need not be in the language of the input (in the case of cross-lingual revision). ### Run the model **Make sure you have the following libraries installed:** ``` - peft - protobuf - sentencepiece - tokenizers - torch - transformers ``` ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "grammarly/medit-xl" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # English GEC using Japanese instructions prompt = '### 命令:\n文章を文法的にする\n### 入力:\nI has small cat ,\n### 出力:\n\n' inputs = tokenizer(prompt, return_tensors='pt') outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # --> I have a small cat , # German GEC using Japanese instructions prompt = '### 命令:\n文章を文法的にする\n### 入力:\nIch haben eines kleines Katze ,\n### 出力:\n\n' # ... # --> Ich habe eine kleine Katze , ``` #### Software https://github.com/vipulraheja/medit ## Citation **BibTeX:** ``` @article{raheja2023medit, title={mEdIT: Multilingual Text Editing via Instruction Tuning}, author={Vipul Raheja and Dimitris Alikaniotis and Vivek Kulkarni and Bashar Alhafni and Dhruv Kumar}, year={2024}, eprint={2402.16472v1}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **APA:** Raheja, V., Alikaniotis, D., Kulkarni, V., Alhafni, B., & Kumar, D. (2024). 
MEdIT: Multilingual Text Editing via Instruction Tuning. ArXiv. /abs/2402.16472
alexis779/Mistral-qlora-multilex
alexis779
2024-06-28T21:39:32Z
2
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.3", "base_model:adapter:mistralai/Mistral-7B-v0.3", "region:us" ]
null
2024-06-19T06:29:43Z
--- base_model: mistralai/Mistral-7B-v0.3 library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_all_v2
ekaterina-blatova-jb
2024-06-28T21:10:56Z
170
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T21:08:55Z
--- {} --- ## Evaluation results Validation loss on the whole input: 0.8578255325555801 Validation loss on completion: 0.9436061959131621
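The card above reports two numbers: validation loss over the whole input and over the completion only. A minimal sketch of how such a masked average can be computed from per-token losses (all token-loss values below are hypothetical, for illustration only):

```python
def masked_mean_loss(token_losses, completion_mask):
    """Average per-token negative log-likelihoods, optionally restricted
    to completion tokens via a 0/1 mask."""
    selected = [l for l, m in zip(token_losses, completion_mask) if m]
    return sum(selected) / len(selected)

# Hypothetical per-token losses: 3 prompt tokens followed by 2 completion tokens.
token_losses = [0.2, 0.4, 0.6, 1.0, 1.2]
whole_input_loss = masked_mean_loss(token_losses, [1] * len(token_losses))
completion_loss = masked_mean_loss(token_losses, [0, 0, 0, 1, 1])
print(round(whole_input_loss, 4), round(completion_loss, 4))  # 0.68 1.1
```

Completion loss typically exceeds whole-input loss when the prompt is easier to predict than the continuation, as in the numbers this card reports.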
Edgar404/donut_tax
Edgar404
2024-06-28T21:07:48Z
33
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-28T18:35:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ILKT/2024-06-22_12-37-29_epoch_14
ILKT
2024-06-28T21:05:21Z
148
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-23T08:48:59Z
--- language: - en - pl model-index: - name: 2024-06-22_12-37-29_epoch_14 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.04771371769384 - type: f1 value: 20.724204994614485 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 51.49999999999999 - type: ap value: 14.265625730646667 - type: f1 value: 43.38565832555766 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.1923022731374213 - type: v_measure_std value: 0.35367125475797656 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 21.872898453261598 - type: f1 value: 19.83619438998234 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 20.90506640432858 - type: f1 value: 18.92711069074263 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 29.256893073301953 - type: f1 value: 25.74578591804067 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 28.012788981800295 - type: f1 value: 25.46088957992473 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 60.886185925282355 - type: ap value: 74.42846609840468 - type: f1 value: 59.2798285713518 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 22.47863238868196 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 20.56367396969172 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 34.6814404432133 - type: f1 value: 33.84774598794685 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 19.37246963562753 - type: f1 value: 17.143257555632257 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
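The clustering entries in the card above report MTEB's `v_measure`. A stdlib sketch of the metric's definition (the harmonic mean of homogeneity and completeness, both derived from label entropies), not the MTEB implementation:

```python
from collections import Counter
from math import log

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, given):
    # H(labels | given): entropy of `labels` within each `given` group, weighted by group size.
    n = len(labels)
    h = 0.0
    for g in set(given):
        subset = [l for l, q in zip(labels, given) if q == g]
        h += (len(subset) / n) * entropy(subset)
    return h

def v_measure(true_labels, pred_labels):
    h_c, h_k = entropy(true_labels), entropy(pred_labels)
    homogeneity = 1.0 if h_c == 0 else 1.0 - conditional_entropy(true_labels, pred_labels) / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - conditional_entropy(pred_labels, true_labels) / h_k
    if homogeneity + completeness == 0:
        return 0.0
    return 2 * homogeneity * completeness / (homogeneity + completeness)

# Cluster ids are arbitrary: a permuted but perfect clustering scores 1.0.
print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Scores near zero, like those in this card, mean the predicted clusters carry almost no information about the true classes.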
Xu-Ouyang/pythia-1.4b-deduped-int4-step36000-GPTQ-wikitext2
Xu-Ouyang
2024-06-28T21:05:16Z
78
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-06-28T21:04:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
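The record above is an int4 GPTQ export of a Pythia checkpoint. GPTQ proper chooses quantized weights by minimizing layer-wise reconstruction error; the sketch below only illustrates the 4-bit affine grid (scale plus zero-point) that such checkpoints store, using hypothetical weight values:

```python
def quantize_int4(weights):
    """Round-to-nearest onto a 16-level affine grid and dequantize back."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels -> 15 intervals
    zero = round(-lo / scale)      # integer zero-point
    q = [max(0, min(15, round(w / scale) + zero)) for w in weights]
    deq = [(v - zero) * scale for v in q]
    return q, deq, scale

weights = [-0.3, -0.1, 0.0, 0.05, 0.2, 0.45]
q, deq, scale = quantize_int4(weights)
```

Round-to-nearest bounds the per-weight error by `scale / 2`; GPTQ improves on this by propagating each rounding error into the not-yet-quantized weights of the same layer.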
SamagraDataGov/whisper-tiny-hi2_test
SamagraDataGov
2024-06-28T21:03:25Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-19T20:12:33Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-hi2_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-hi2_test This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4940 - Wer: 59.7206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.75e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 50 - training_steps: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.6766 | 1.2698 | 40 | 0.6154 | 81.4733 | | 0.3599 | 2.5397 | 80 | 0.5078 | 67.0110 | | 0.2297 | 3.8095 | 120 | 0.4940 | 59.7206 | | 0.153 | 5.0794 | 160 | 0.5193 | 62.0745 | | 0.0885 | 6.3492 | 200 | 0.5557 | 60.5843 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
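The Whisper card above reports WER as a percentage (59.72 at the best checkpoint). A self-contained sketch of the metric, word-level edit distance normalized by reference length, not the `evaluate`/`jiwer` implementation used in training:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six words
```

Note that WER can exceed 100% when the hypothesis contains many insertions, which is why early checkpoints in tables like the one above sometimes report values above 80.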
ILKT/2024-06-22_12-37-29_epoch_13
ILKT
2024-06-28T21:00:00Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-23T07:15:10Z
--- language: - en - pl model-index: - name: 2024-06-22_12-37-29_epoch_13 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 21.8986083499006 - type: f1 value: 20.494045643513033 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 52.31000000000001 - type: ap value: 14.770391705888935 - type: f1 value: 44.30460630147276 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.3467903203835494 - type: v_measure_std value: 0.28324230950278206 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 19.024882313382648 - type: f1 value: 17.245623534419032 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 18.53910477127398 - type: f1 value: 16.574522621600035 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 26.351714862138536 - type: f1 value: 23.448065286902448 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 25.33202164289228 - type: f1 value: 23.051659134513013 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.10628439038517 - type: ap value: 75.1180568321982 - type: f1 value: 59.790264880128575 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 22.646694163528075 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 20.228627260533187 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 33.9196675900277 - type: f1 value: 33.466154937936984 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 20.52631578947369 - type: f1 value: 18.122972557323955 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Omriy123/vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test10
Omriy123
2024-06-28T20:55:22Z
217
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-28T20:51:30Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test10 results: - task: name: Image Classification type: image-classification dataset: name: Dogs_vs_Cats type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.944 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs1_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test10 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.2319 - Accuracy: 0.944 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0051 | 1.0 | 469 | 0.2319 | 0.944 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.19.1
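The ViT card above trains with `lr_scheduler_type: linear`, a learning rate of 5e-05, and 469 optimizer steps for the single epoch. A sketch of that schedule's shape, matching the warmup-then-linear-decay form of the Trainer's linear scheduler (warmup defaults to 0 here, as none is listed in the card):

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_steps=0):
    """Linear warmup (if any), then linear decay from base_lr to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

total = 469  # one epoch at effective batch size 32, per the card's training table
schedule = [linear_lr(s, total) for s in range(total + 1)]
```

With no warmup the schedule starts at the base rate and decays monotonically to exactly zero at the final step.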
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_all_v1
ekaterina-blatova-jb
2024-06-28T20:47:25Z
170
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T20:45:39Z
--- {} --- ## Evaluation results Validation loss on the whole input: 0.8587286151014268 Validation loss on completion: 0.9442666905815713
Kedar84/phi-3-vision-v0.2
Kedar84
2024-06-28T20:25:15Z
104
0
transformers
[ "transformers", "pytorch", "phi3_v", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-06-28T20:23:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ekaterina-blatova-jb/model_lr1e-4_old_scheduler_with_t_max_275_all_v0
ekaterina-blatova-jb
2024-06-28T20:24:17Z
170
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T20:22:31Z
--- {} --- ## Evaluation results Validation loss on the whole input: 0.8519790729042143 Validation loss on completion: 0.9668812284362502
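The two figures differ only in which token positions enter the average: the "whole input" loss averages token-level cross-entropy over every target position, while the "completion" loss masks prompt positions (conventionally with the label `-100`) and averages over completion tokens only. A minimal sketch of that averaging convention — the token losses and labels below are toy values for illustration, not numbers from this model:

```python
# Toy illustration of "loss on whole input" vs "loss on completion".
# Prompt positions are masked with -100 so they are excluded from the mean.

def mean_loss(token_losses, labels, ignore_index=-100):
    kept = [l for l, y in zip(token_losses, labels) if y != ignore_index]
    return sum(kept) / len(kept)

token_losses = [0.5, 0.5, 0.5, 1.0, 1.0]        # per-token cross-entropy
labels_whole = [10, 11, 12, 13, 14]             # nothing masked
labels_completion = [-100, -100, -100, 13, 14]  # prompt tokens masked out

print(mean_loss(token_losses, labels_whole))       # 0.7
print(mean_loss(token_losses, labels_completion))  # 1.0
```

As in the numbers reported above, the completion-only loss is typically higher, since prompt tokens are usually easier to predict than completion tokens.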
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test3
Omriy123
2024-06-28T20:21:04Z
195
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-06-28T20:05:48Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test3 results: - task: name: Image Classification type: image-classification dataset: name: Dogs_vs_Cats type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9410666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles10_seed1_q3_dropout_v2_test3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.2836 - Accuracy: 0.9411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0042 | 1.0 | 469 | 0.2944 | 0.9333 | | 0.0389 | 2.0 | 938 | 0.2836 | 0.9411 | | 0.0017 | 3.0 | 1407 | 0.2929 | 0.9429 | | 0.001 | 4.0 | 1876 | 0.3287 | 0.9451 | | 0.0001 | 5.0 | 2345 | 0.3298 | 0.9469 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.19.1
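The headline figures (loss 0.2836, accuracy 0.9411) match epoch 2 in the results table — the row with the lowest validation loss, not the final epoch. A small sketch of that selection over the rows above (assuming best-checkpoint selection by eval loss, which the listed hyperparameters do not state explicitly):

```python
# The card's reported metrics correspond to the epoch with the lowest
# validation loss in the table above, not to the last epoch.
results = [  # (epoch, eval_loss, accuracy)
    (1, 0.2944, 0.9333),
    (2, 0.2836, 0.9411),
    (3, 0.2929, 0.9429),
    (4, 0.3287, 0.9451),
    (5, 0.3298, 0.9469),
]
best = min(results, key=lambda r: r[1])  # pick the minimum-eval-loss row
print(best)  # (2, 0.2836, 0.9411)
```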
mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF
mradermacher
2024-06-28T20:20:53Z
161
0
transformers
[ "transformers", "gguf", "en", "base_model:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO", "base_model:quantized:chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-27T21:00:38Z
--- base_model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-8B-SimPO-ExPO-GGUF/resolve/main/Llama-3-Instruct-8B-SimPO-ExPO.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
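The multi-part GGUF files mentioned under Usage are plain byte-level splits, so reassembly is a straight in-order concatenation. A hedged sketch with hypothetical file names (real part names come from the repo's file list; the `printf` lines below are stand-ins for downloaded parts):

```shell
# Hypothetical part names; multi-part GGUF uploads are byte-level splits,
# so the parts are concatenated in order to rebuild the single .gguf file.
printf 'first-half-'  > model.gguf.part1of2   # stand-in for a downloaded part
printf 'second-half'  > model.gguf.part2of2   # stand-in for a downloaded part
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
cat model.gguf   # prints: first-half-second-half
```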
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ILKT/2024-06-22_12-37-29_epoch_5
ILKT
2024-06-28T20:13:37Z
147
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-22T18:32:25Z
--- language: - en - pl model-index: - name: 2024-06-22_12-37-29_epoch_5 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 21.361829025844926 - type: f1 value: 19.69920196782064 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 48.650000000000006 - type: ap value: 14.2008038394895 - type: f1 value: 42.02710366782284 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.385614567474454 - type: v_measure_std value: 0.7709471135169303 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 23.500336247478142 - type: f1 value: 21.243243108292056 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 22.96606000983768 - type: f1 value: 20.651311500662654 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 32.50168123739072 - type: f1 value: 28.748118197831474 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 31.087063453025088 - type: f1 value: 28.824301745580023 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 63.14219519258616 - type: ap value: 75.68633293879387 - type: f1 value: 61.29650902564272 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 22.886236405259375 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 19.824200488314524 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 35.3185595567867 - type: f1 value: 35.06444726355803 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 22.08502024291498 - type: f1 value: 19.002463151657718 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Oxtiz/onnx_rubertconv_toxic_editor
Oxtiz
2024-06-28T20:05:29Z
4
0
transformers
[ "transformers", "onnx", "bert", "token-classification", "ru", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-28T20:04:48Z
--- language: ru --- # Model Card for onnx_rubertconv_toxic_editor <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Me - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** ru - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model.
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CodeHima/TOSBertV2
CodeHima
2024-06-28T20:04:19Z
128
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "nlp", "TOS", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-28T18:22:10Z
---
license: mit
language:
- en
metrics:
- accuracy
widget:
- text: "You have the right to use CommunityConnect for its intended purpose of connecting with others, sharing content responsibly, and engaging in constructive dialogue. You are responsible for the content you post and must respect the rights and privacy of others."
  example_title: "Fair Clause"
- text: "We reserve the right to suspend, terminate, or restrict your access to the platform at any time and for any reason, without prior notice or explanation. This includes but is not limited to violations of our community guidelines or terms of service, as determined solely by ConnectWorld."
  example_title: "Unfair Clause"
library_name: transformers
pipeline_tag: text-classification
tags:
- nlp
- bert
- TOS
---

# TOSBertV2: Terms of Service Unfairness Classifier

## Model Details

- **Model Name:** TOSBertV2
- **Model Type:** Fine-tuned BERT for sequence classification
- **Version:** 2.0
- **Language(s):** English
- **License:** MIT
- **Developer:** Himanshu Mohanty

## Model Description

TOSBertV2 is a fine-tuned BERT model designed to classify clauses in Terms of Service (ToS) documents by their unfairness level. It can help users identify potentially problematic clauses in legal documents, particularly in the context of consumer protection.

### Task

The model performs multi-class classification on individual sentences or clauses, categorizing them into three levels of unfairness:

0. Clearly Fair
1. Potentially Unfair
2. Clearly Unfair

### Training Data

The model was trained on the [CodeHima/TOS_Dataset](https://huggingface.co/datasets/CodeHima/TOS_Dataset) dataset, which contains annotated sentences from Terms of Service documents. Each sentence is labeled with one of the three unfairness levels.
### Model Architecture

- Base Model: BERT (bert-base-uncased)
- Fine-tuning: Sequence classification head
- Input: Tokenized text (max length 512 tokens)
- Output: Probabilities for each unfairness level

## Performance

The model's performance metrics on the test set:

- Accuracy: 0.8795761078998073
- F1 Score (weighted): 0.885282
- Precision (weighted): 0.883729
- Recall (weighted): 0.889157

## Limitations

- The model is trained on English-language ToS documents and may not perform well on other languages or legal contexts.
- Performance may vary depending on the specific wording and context of clauses.
- The model should be used as a tool to assist human judgment, not as a definitive legal assessment.

## Ethical Considerations

- This model is intended to help identify potentially unfair clauses, but it should not be considered legal advice.
- Users should be aware of potential biases in the training data and model predictions.
- The model's output should be reviewed by legal professionals for critical applications.
## How to Use

You can use this model directly with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "CodeHima/TOSBertV2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Function to predict unfairness level
def predict_unfairness(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = torch.softmax(outputs.logits, dim=-1).squeeze()
    predicted_class = torch.argmax(probabilities).item()
    label_mapping = {0: 'clearly_fair', 1: 'potentially_unfair', 2: 'clearly_unfair'}
    predicted_label = label_mapping[predicted_class]
    return predicted_label, probabilities.tolist()

# Example usage
clause = "The company reserves the right to change these terms at any time without notice."
predicted_label, probabilities = predict_unfairness(clause)
print(f"Predicted unfairness level: {predicted_label}")
print("Probabilities:")
for label, prob in zip(['clearly_fair', 'potentially_unfair', 'clearly_unfair'], probabilities):
    print(f"{label}: {prob:.4f}")
```

## Training

The model was trained using the following hyperparameters:

- Epochs: 3
- Batch Size: 16
- Learning Rate: [ ]
- Optimizer: AdamW
- Weight Decay: 0.01

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{TOSBertV2,
  author = {Himanshu Mohanty},
  title = {TOSBertV2: A fine-tuned BERT model designed to classify clauses in Terms of Service},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/CodeHima/TOSBertV2}}
}
```
CodeHima/TOSBert
CodeHima
2024-06-28T20:03:52Z
110
0
transformers
[ "transformers", "joblib", "safetensors", "bert", "text-classification", "tos", "terms of services", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-28T06:35:23Z
--- language: - en metrics: - accuracy widget: - text: "You have the right to use CommunityConnect for its intended purpose of connecting with others, sharing content responsibly, and engaging in constructive dialogue. You are responsible for the content you post and must respect the rights and privacy of others." example_title: "Fair Clause" - text: " We reserve the right to suspend, terminate, or restrict your access to the platform at any time and for any reason, without prior notice or explanation. This includes but is not limited to violations of our community guidelines or terms of service, as determined solely by ConnectWorld." example_title: "Unfair Clause" library_name: transformers pipeline_tag: text-classification tags: - tos - terms of services - bert --- # TOSBert **TOSBert** is a fine-tuned BERT model for sequence classification tasks. It is trained on a custom dataset for multi-label classification. ## Model Details - **Model Name**: TOSBert - **Model Architecture**: BERT - **Framework**: [Hugging Face Transformers](https://huggingface.co/transformers/) - **Model Type**: Sequence Classification (Multi-label Classification) ## Dataset The model is trained on the [online_terms_of_service](https://huggingface.co/datasets/joelniklaus/online_terms_of_service) dataset hosted on Hugging Face. This dataset consists of text sequences extracted from various online terms of service documents. Each sequence is labeled with multiple categories related to legal and privacy terms. 
## Training The model was fine-tuned using the following parameters: - **Number of Epochs**: 3 - **Batch Size**: 16 (both for training and evaluation) - **Warmup Steps**: 500 - **Weight Decay**: 0.01 - **Learning Rate**: Automatically adjusted ## Usage ### Installation To use this model, you need to install the `transformers` library from Hugging Face: ```bash pip install transformers ``` ### Loading the Model You can load the model using the following code: ```python from transformers import BertForSequenceClassification, BertTokenizer model_name = "CodeHima/TOSBert" model = BertForSequenceClassification.from_pretrained(model_name) tokenizer = BertTokenizer.from_pretrained(model_name) ``` ### Inference Here is an example of how to use the model for inference: ```python from transformers import pipeline classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, return_all_scores=True) text = "Your input text here" predictions = classifier(text) print(predictions) ``` ### Training Script Below is an example script used for training the model: ```python from transformers import Trainer, TrainingArguments, BertForSequenceClassification, BertTokenizer import torch from sklearn.metrics import accuracy_score, precision_recall_fscore_support # Define the model model_name = "bert-base-uncased" model = BertForSequenceClassification.from_pretrained(model_name, num_labels=3) # Define the tokenizer tokenizer = BertTokenizer.from_pretrained(model_name) # Load your dataset # train_dataset and eval_dataset should be instances of torch.utils.data.Dataset # Example: train_dataset = YourDataset(train_data) # Define training arguments training_args = TrainingArguments( output_dir='./results', num_train_epochs=3, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', logging_steps=10, eval_strategy="epoch" ) # Custom data collator to convert labels to floats def data_collator(features): batch = {} 
first = features[0] if 'label' in first and first['label'] is not None: dtype = torch.float32 batch['labels'] = torch.tensor([f['label'] for f in features], dtype=dtype) for k, v in first.items(): if k != 'label' and v is not None and not isinstance(v, str): batch[k] = torch.stack([f[k] for f in features]) return batch # Define the compute metrics function def compute_metrics(pred): labels = pred.label_ids preds = (pred.predictions > 0.5).astype(int) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } # Initialize the Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, data_collator=data_collator ) # Train the model trainer.train() ``` ## Evaluation To evaluate the model on the validation set, you can use the following code: ```python results = trainer.evaluate() print(results) ``` ## License This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details. ## Citation If you use this model in your research, please cite it as follows: ```bibtex @misc{TOSBert, author = {Himanshu Mohanty}, title = {TOSBert: Fine-tuned BERT model for multi-label classification}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face Model Hub}, howpublished = {\url{https://huggingface.co/CodeHima/TOSBert}} } ``` ## Acknowledgements This project uses the [Hugging Face Transformers](https://huggingface.co/transformers/) library. Special thanks to the developers and contributors of this library.
ILKT/2024-06-22_12-37-29_epoch_3
ILKT
2024-06-28T20:01:35Z
147
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-22T15:22:02Z
--- language: - en - pl model-index: - name: 2024-06-22_12-37-29_epoch_3 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.77335984095427 - type: f1 value: 21.633161157413287 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 52.1 - type: ap value: 14.669897873185539 - type: f1 value: 44.39499194571317 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.9997457724816274 - type: v_measure_std value: 0.7798049810107266 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 23.69872225958305 - type: f1 value: 22.329202066465307 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 22.808657156910968 - type: f1 value: 21.15686015469099 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
32.39071956960323 - type: f1 value: 29.01276146175851 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 30.10821446138711 - type: f1 value: 28.129954296299786 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 59.25282363162467 - type: ap value: 73.7417044611021 - type: f1 value: 57.76542870147847 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 23.16471941868449 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 20.457634502092688 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 36.952908587257625 - type: f1 value: 35.76600816919445 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 25.10121457489879 - type: f1 value: 20.318534993487354 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Oxtiz/onnx_rubertconv_toxic_clf
Oxtiz
2024-06-28T19:58:26Z
4
0
transformers
[ "transformers", "onnx", "bert", "text-classification", "ru", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-28T19:58:04Z
--- language: ru --- # Model Card for onnx_rubertconv_toxic_clf <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Me - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** ru - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model.
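The card itself provides no snippet; a hedged sketch (not from the card) of running this ONNX checkpoint with 🤗 Optimum, assuming the model was exported as a sequence-classification head (the tags list `onnx`, `bert`, and `text-classification`):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

repo = "Oxtiz/onnx_rubertconv_toxic_clf"  # repo id from this card

# Load the ONNX model and tokenizer; runs on ONNX Runtime, no PyTorch weights needed
model = ORTModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Russian example input ("An example text to check for toxicity.")
print(clf("Пример текста для проверки на токсичность."))
```

The exact label names depend on how the classifier head was exported, which the card does not document.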
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ILKT/2024-06-22_12-37-29_epoch_2
ILKT
2024-06-28T19:56:14Z
147
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-22T13:47:03Z
--- language: - en - pl model-index: - name: 2024-06-22_12-37-29_epoch_2 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.69383697813121 - type: f1 value: 21.362811160799673 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.339999999999996 - type: ap value: 14.65809485813021 - type: f1 value: 45.01084326182093 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 3.5718919710260297 - type: v_measure_std value: 0.8345651903614992 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 22.488231338264963 - type: f1 value: 20.826027786002005 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 21.898671913428434 - type: f1 value: 20.420902804885205 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 32.24277067921991 - type: f1 value: 28.185730890243683 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 30.300049188391544 - type: f1 value: 27.073486016784653 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 59.60903562119897 - type: ap value: 74.9405933784915 - type: f1 value: 58.672915231497 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 24.760685179550322 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 23.94393594955405 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 38.91966759002769 - type: f1 value: 38.11020162091945 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 24.655870445344128 - type: f1 value: 20.860413679518636 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
koeng/output
koeng
2024-06-28T19:54:45Z
116
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T19:25:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASR-UWC/whisper-small-hi
ASR-UWC
2024-06-28T19:52:24Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-28T12:41:21Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Hi - Sanchit Gandhi results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: hi split: None args: 'config: hi, split: test' metrics: - name: Wer type: wer value: 32.76475069838314 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hi - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4416 - Wer: 32.7648 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.0919 | 2.4450 | 1000 | 0.2982 | 35.1308 | | 0.0209 | 4.8900 | 2000 | 0.3554 | 34.1023 | | 0.001 | 7.3350 | 3000 | 0.4183 | 32.8706 | | 0.0005 | 9.7800 | 4000 | 0.4416 | 32.7648 | ### Framework versions - Transformers 4.42.2 - Pytorch 2.3.1+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
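The card omits a usage snippet; a minimal sketch of transcribing Hindi audio with the 🤗 Transformers `pipeline` API (the repo id is taken from this card's metadata, and `sample.wav` is a placeholder for your own audio file):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ASR-UWC/whisper-small-hi",  # repo id from this card
)

# Force Hindi transcription instead of Whisper's language auto-detection
result = asr(
    "sample.wav",  # placeholder path, not from the card
    generate_kwargs={"language": "hindi", "task": "transcribe"},
)
print(result["text"])
```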
gelukuMLG/Llama-3-Cat-Instruct-15B-GGUF
gelukuMLG
2024-06-28T19:49:29Z
22
1
null
[ "gguf", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-18T10:32:27Z
--- license: llama3 --- ### Compute for this merge was provided by KoboldAI. ### Important: Because this model is based on Cat-8B-Instruct-V1, it inherits that model's stop-sequence issues. Make sure to add `</s>` as a stop sequence in whatever backend or UI you are using. ### The following models were used in this recipe: - https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed-ft - https://huggingface.co/elinas/Llama-3-15B-Instruct-zeroed - https://huggingface.co/TheSkullery/llama-3-cat-8b-instruct-v1 Recipe used: ``` merge_method: passthrough dtype: bfloat16 vocab_type: bpe slices: - sources: - layer_range: [0, 24] model: TheSkullery/llama-3-cat-8b-instruct-v1 - sources: - layer_range: [8, 24] model: TheSkullery/llama-3-cat-8b-instruct-v1 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [8, 24] model: TheSkullery/llama-3-cat-8b-instruct-v1 parameters: scale: - filter: o_proj value: 0.0 - filter: down_proj value: 0.0 - value: 1.0 - sources: - layer_range: [24, 32] model: TheSkullery/llama-3-cat-8b-instruct-v1 name: LLaMa-3-Cat-Instruct-Unhealed-15B --- merge_method: task_arithmetic dtype: bfloat16 vocab_type: bpe base_model: elinas/Llama-3-15B-Instruct-zeroed models: - model: elinas/Llama-3-15B-Instruct-zeroed-ft parameters: weight: 1.0 - model: LLaMa-3-Cat-Instruct-Unhealed-15B parameters: weight: 1.0 ```
not-lain/mayo
not-lain
2024-06-28T19:45:23Z
116
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T19:40:16Z
--- license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - trl - sft - generated_from_trainer model-index: - name: mayo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mayo This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5 ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
shossain/gemma-test-shah
shossain
2024-06-28T19:42:50Z
5
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T18:59:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yuriachermann/My_AGI_llama_2_7B
yuriachermann
2024-06-28T19:37:32Z
3
2
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "Dolly", "ipex", "Max Series GPU", "question-answering", "en", "dataset:databricks/databricks-dolly-15k", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
question-answering
2024-06-03T14:30:50Z
--- license: llama2 library_name: peft tags: - trl - sft - generated_from_trainer - Dolly - ipex - Max Series GPU base_model: meta-llama/Llama-2-7b-hf datasets: - databricks/databricks-dolly-15k model-index: - name: My_AGI_llama_2_7B results: [] language: - en metrics: - accuracy - bertscore - bleu pipeline_tag: question-answering --- # My_AGI_llama_2_7B **Model Type:** Fine-Tuned **Model Base:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) **Datasets Used:** [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) **Author:** [Yuri Achermann](https://huggingface.co/yuriachermann) **Date:** June 03, 2024 ------------------------- ## Training procedure ### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - training_steps: 593 ### Framework versions - PEFT==0.11.1 - Transformers==4.41.2 - Pytorch==2.1.0.post0+cxx11.abi - Datasets==2.19.2 - Tokenizers==0.19.1 ------------------------- ## Intended uses & limitations **Primary Use Case:** The model is intended for generating human-like responses in conversational applications, like chatbots or virtual assistants. **Limitations:** The model may generate inaccurate or biased content as it reflects the data it was trained on. It is essential to evaluate the generated responses in context and use the model responsibly. 
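The card describes a PEFT adapter on Llama-2-7b but gives no loading snippet; a sketch using `AutoPeftModelForCausalLM`, which resolves and loads the base model together with the adapter (assumes you have access to the gated `meta-llama/Llama-2-7b-hf` weights):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "yuriachermann/My_AGI_llama_2_7B"  # repo id from this card

# Loads the base Llama-2-7b weights plus the fine-tuned adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```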
------------------------- ## Evaluation The evaluation platform consists of Gaudi Accelerators and Xeon CPUs running benchmarks from the [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | |:-------:|:-----:|:---------:|:-----:|:----------:|:----------:| | 54.904 | 45.65 | 76.8 | 42.02 | 40.2 | 69.85 | ------------------------- ## Ethical Considerations The model may inherit biases present in the training data. It is crucial to use the model in a way that promotes fairness and mitigates potential biases. ------------------------- ## Acknowledgments This fine-tuning effort was made possible by the support of Intel, that provided the computing resources, and [Eduardo Alvarez](https://huggingface.co/eduardo-alvarez). Additional shout-out to the creators of the Llama-2-7b-hf model and the contributors to the databricks-dolly-15k dataset. ------------------------- ## Contact Information For questions or feedback about this model, please contact **[Yuri Achermann](mailto:yuri.achermann@gmail.com)**. ------------------------- ## License This model is distributed under **Apache 2.0 License**.
Ananthu357/Ananthus-BAAI-for-contracts5.0
Ananthu357
2024-06-28T19:34:29Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:453", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:BAAI/bge-large-en", "base_model:finetune:BAAI/bge-large-en", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-28T19:33:12Z
--- base_model: BAAI/bge-large-en datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:453 - loss:CosineSimilarityLoss widget: - source_sentence: Termination notice sentences: - "having value more than Rs 20 crore and original period of completion 12 months\ \ or more, when there is no reduction in original scope of work by more than 10%,\ \ and no extension granted on either railway or Contractor's account," - Special Conditions might exist in the contract and supersede the Standard General Conditions. - Subject to the provisions of the aforesaid Arbitration and Conciliation Act 1996 and the rules thereunder and relevant para of General Conditions of Contract - source_sentence: Impact of breach of terms by subcontracting. sentences: - The contractor shall commence the works within 15 days after the receipt by him of an order in writing to this effect from the Engineer and shall proceed with the same with due expedition and without delay. - Railway may, if satisfied that the works can be completed by the Contractor within reasonable short time thereafter, allow the Contractor for further extension of time (Proforma at Annexure-VII) as the Engineer may decide - On first occasion of noticing exaggerated/ false measurement, Engineer shall recover liquidated damages equal to 10% of claimed gross bill value. - source_sentence: 'Place of Arbitration: The place of arbitration would be within the geographical limits of the Division of the Railway' sentences: - the Railway may grant such extension or extensions of the completion date as may be considered reasonable. - Location for dispute resolution - Any item of work carried out by the Contractor on the instructions of the Engineer which is not included in the accepted Schedules of Rates shall be executed at the rates set forth in the Schedule of Rates of Railway.
- source_sentence: Special Conditions of Contract must be referred to while executing the contract sentences: - a penal interest of 12% per annum shall be charged for the delay beyond 21(Twenty one) days, i.e. from 22nd day after the date of issue of LOA. Further, if the 60th day happens to be a declared holiday in the concerned office of the Railway, submission of PG can be accepted on the next working day. - Contractor should finish the works according to Special conditions of Contract. - This explains the impact of breaching terms in subcontracting part. - source_sentence: Additional documents involve General Conditions of Contract, Regulations for Tenders and Contracts and Special Conditions of Contract. sentences: - "At the final stage of completion and commissioning of work, in case the contractor's\ \ failure is limited to only some of the works costing not more than 2% of the\ \ original contract value," - Any material found during excavation should be reported to the engineer. - If the Contractor shall be dissatisfied by reason of any decision of the Engineer's representative, he shall be entitled to refer the matter to the Engineer who shall there upon confirm or vary such decision. --- # SentenceTransformer based on BAAI/bge-large-en This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Ananthu357/Ananthus-BAAI-for-contracts5.0") # Run inference sentences = [ 'Additional documents involve General Conditions of Contract, Regulations for Tenders and Contracts and Special Conditions of Contract.', "If the Contractor shall be dissatisfied by reason of any decision of the Engineer's representative, he shall be entitled to refer the matter to the Engineer who shall there upon confirm or vary such decision.", "At the final stage of completion and commissioning of work, in case the contractor's failure is limited to only some of the works costing not more than 2% of the original contract value,", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 25 - `warmup_ratio`: 0.1 - `fp16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 25 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 
'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | |:-------:|:----:|:-------------:|:------:| | 3.3448 | 100 | 0.06 | 0.0540 | | 6.6897 | 200 | 0.0084 | 0.0568 | | 10.0345 | 300 | 0.0035 | 0.0548 | | 13.3448 | 400 | 0.0018 | 0.0536 | | 16.6897 | 500 | 0.0011 | 0.0548 | | 20.0345 | 600 | 0.001 | 0.0553 | | 23.3448 | 700 | 0.0009 | 0.0556 | | 3.3448 | 100 | 
0.0014 | 0.0578 | | 6.6897 | 200 | 0.0038 | 0.0582 | | 10.0345 | 300 | 0.0025 | 0.0623 | | 13.3448 | 400 | 0.0014 | 0.0579 | | 16.6897 | 500 | 0.0008 | 0.0582 | | 20.0345 | 600 | 0.0006 | 0.0579 | | 23.3448 | 700 | 0.0006 | 0.0585 | | 3.3448 | 100 | 0.0029 | 0.0640 | | 6.6897 | 200 | 0.0048 | 0.0561 | | 10.0345 | 300 | 0.0018 | 0.0524 | | 13.3448 | 400 | 0.001 | 0.0522 | | 16.6897 | 500 | 0.0007 | 0.0514 | | 20.0345 | 600 | 0.0005 | 0.0519 | | 23.3448 | 700 | 0.0005 | 0.0522 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.0+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
benmajor27/whisper-large-v3-hu_full
benmajor27
2024-06-28T19:29:17Z
135
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "hu", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-28T08:11:09Z
--- base_model: openai/whisper-large-v3 datasets: - mozilla-foundation/common_voice_17_0 language: - hu license: apache-2.0 metrics: - wer tags: - generated_from_trainer model-index: - name: Whisper Large V3 HU Full - snoopyben27 results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 config: default split: test args: 'config: hu, split: test' metrics: - type: wer value: 8.860932585806099 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large V3 HU Full - snoopyben27 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.0911 - Wer: 8.8609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 7000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.1301 | 0.3299 | 1000 | 0.1351 | 14.5084 | | 0.1324 | 0.6598 | 2000 | 0.1208 | 13.2777 | | 0.1136 | 0.9898 | 3000 | 0.1066 | 11.5548 | | 0.0471 | 1.3197 | 4000 | 0.1030 | 10.3788 | | 0.0337 | 1.6496 | 5000 | 0.0955 | 9.8045 | | 0.0311 | 1.9795 | 6000 | 0.0875 | 9.2438 | | 0.0108 | 2.3095 | 7000 | 0.0911 | 8.8609 | ### Framework versions - Transformers 
4.42.2 - Pytorch 2.3.1 - Datasets 2.20.0 - Tokenizers 0.19.1
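The card leaves its usage sections blank. As a hedged sketch (not from the card: the model ID is assumed from this card's header, and the call uses the standard `transformers` automatic-speech-recognition pipeline), transcription could look like:

```python
# Hedged sketch, not from the card: transcribe Hungarian speech with the
# fine-tuned checkpoint via the transformers ASR pipeline.
# MODEL_ID is assumed from this card's header; verify it before use.
from transformers import pipeline

MODEL_ID = "benmajor27/whisper-large-v3-hu_full"

def build_transcriber():
    # Pin the language and task so Whisper does not auto-detect them.
    return pipeline(
        "automatic-speech-recognition",
        model=MODEL_ID,
        generate_kwargs={"language": "hungarian", "task": "transcribe"},
    )

if __name__ == "__main__":
    transcriber = build_transcriber()
    print(transcriber("sample.wav")["text"])  # any local audio file
```

The `generate_kwargs` shown here are illustrative; long-form audio may additionally need chunking options.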
panxinyang/Qwen-Qwen1.5-1.8B-1719602865
panxinyang
2024-06-28T19:27:48Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-06-28T19:27:45Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
sert121/llama_8b_adapters
sert121
2024-06-28T19:25:02Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:defog/llama-3-sqlcoder-8b", "base_model:adapter:defog/llama-3-sqlcoder-8b", "region:us" ]
null
2024-06-28T19:14:52Z
--- base_model: defog/llama-3-sqlcoder-8b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
Ziray/model_4_bit
Ziray
2024-06-28T19:24:31Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-06-28T19:12:32Z
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** Ziray - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ILKT/2024-06-24_00-11-56_epoch_6
ILKT
2024-06-28T19:19:44Z
145
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-24T06:26:21Z
--- language: - en - pl model-index: - name: 2024-06-24_00-11-56_epoch_6 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.55467196819086 - type: f1 value: 19.19828737718257 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 51.33 - type: ap value: 14.015570996047844 - type: f1 value: 43.23138599880047 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.8799530368800563 - type: v_measure_std value: 0.256649729248376 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 18.54068594485541 - type: f1 value: 16.29550022168886 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 17.466797835710775 - type: f1 value: 14.980594904804006 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
26.36852723604573 - type: f1 value: 23.092371479862656 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 25.238563698967038 - type: f1 value: 22.644235035013953 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 55.25629887054735 - type: ap value: 69.0709380197225 - type: f1 value: 51.947600759547306 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 27.239201733982988 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 23.144006919275732 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 37.22991689750693 - type: f1 value: 36.92338407309846 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 23.60323886639676 - type: f1 value: 19.171138843465414 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Azazelle/Llama-3-Nerdy-RP-8B
Azazelle
2024-06-28T19:19:15Z
7
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:Azazelle/ANJIR-ADAPTER-128", "base_model:merge:Azazelle/ANJIR-ADAPTER-128", "base_model:Azazelle/Aura_Llama3", "base_model:merge:Azazelle/Aura_Llama3", "base_model:Azazelle/BlueMoon_Llama3", "base_model:merge:Azazelle/BlueMoon_Llama3", "base_model:Azazelle/Llama-3-8B-Abomination-LORA", "base_model:merge:Azazelle/Llama-3-8B-Abomination-LORA", "base_model:Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B", "base_model:merge:Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B", "base_model:Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B", "base_model:merge:Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B", "base_model:Azazelle/Llama-3-LongStory-LORA", "base_model:merge:Azazelle/Llama-3-LongStory-LORA", "base_model:Azazelle/Llama3_RP_ORPO_LoRA", "base_model:merge:Azazelle/Llama3_RP_ORPO_LoRA", "base_model:Azazelle/Luna_Llama3", "base_model:merge:Azazelle/Luna_Llama3", "base_model:Azazelle/Nimue-8B", "base_model:merge:Azazelle/Nimue-8B", "base_model:Azazelle/RP_Format_QuoteAsterisk_Llama3", "base_model:merge:Azazelle/RP_Format_QuoteAsterisk_Llama3", "base_model:Azazelle/Smarts_Llama3", "base_model:merge:Azazelle/Smarts_Llama3", "base_model:Azazelle/Theory_of_Mind_Llama3", "base_model:merge:Azazelle/Theory_of_Mind_Llama3", "base_model:Azazelle/llama3-8b-hikikomori-v0.4", "base_model:merge:Azazelle/llama3-8b-hikikomori-v0.4", "base_model:ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA", "base_model:merge:ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-28T19:05:36Z
--- base_model: - ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA - Azazelle/Aura_Llama3 - Azazelle/llama3-8b-hikikomori-v0.4 - Azazelle/RP_Format_QuoteAsterisk_Llama3 - Azazelle/Theory_of_Mind_Llama3 - Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B - Azazelle/ANJIR-ADAPTER-128 - Azazelle/Llama3_RP_ORPO_LoRA - Azazelle/Smarts_Llama3 - Azazelle/BlueMoon_Llama3 - Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B - Azazelle/Nimue-8B - Azazelle/Luna_Llama3 - Azazelle/Llama-3-LongStory-LORA - Azazelle/Llama-3-8B-Abomination-LORA library_name: transformers tags: - mergekit - merge --- # nerdy_rp This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using output/stop_it_nerd as a base. ### Models Merged The following models were included in the merge: * output/stop_it_nerd + [ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA](https://huggingface.co/ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA) * output/stop_it_nerd + [Azazelle/Aura_Llama3](https://huggingface.co/Azazelle/Aura_Llama3) * output/stop_it_nerd + [Azazelle/llama3-8b-hikikomori-v0.4](https://huggingface.co/Azazelle/llama3-8b-hikikomori-v0.4) * output/stop_it_nerd + [Azazelle/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/Azazelle/RP_Format_QuoteAsterisk_Llama3) * output/stop_it_nerd + [Azazelle/Theory_of_Mind_Llama3](https://huggingface.co/Azazelle/Theory_of_Mind_Llama3) * output/stop_it_nerd + [Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B](https://huggingface.co/Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B) * output/stop_it_nerd + [Azazelle/ANJIR-ADAPTER-128](https://huggingface.co/Azazelle/ANJIR-ADAPTER-128) * output/stop_it_nerd + [Azazelle/Llama3_RP_ORPO_LoRA](https://huggingface.co/Azazelle/Llama3_RP_ORPO_LoRA) * output/stop_it_nerd + [Azazelle/Smarts_Llama3](https://huggingface.co/Azazelle/Smarts_Llama3) * 
output/stop_it_nerd + [Azazelle/BlueMoon_Llama3](https://huggingface.co/Azazelle/BlueMoon_Llama3) * output/stop_it_nerd + [Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B](https://huggingface.co/Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B) * output/stop_it_nerd + [Azazelle/Nimue-8B](https://huggingface.co/Azazelle/Nimue-8B) * output/stop_it_nerd + [Azazelle/Luna_Llama3](https://huggingface.co/Azazelle/Luna_Llama3) * output/stop_it_nerd + [Azazelle/Llama-3-LongStory-LORA](https://huggingface.co/Azazelle/Llama-3-LongStory-LORA) * output/stop_it_nerd + [Azazelle/Llama-3-8B-Abomination-LORA](https://huggingface.co/Azazelle/Llama-3-8B-Abomination-LORA) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: output/stop_it_nerd dtype: bfloat16 merge_method: model_stock slices: - sources: - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Llama-3-8B-Abomination-LORA - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B - layer_range: [0, 32] model: output/stop_it_nerd+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Llama-3-LongStory-LORA - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/ANJIR-ADAPTER-128 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Llama3_RP_ORPO_LoRA - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/RP_Format_QuoteAsterisk_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Theory_of_Mind_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Aura_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Luna_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/BlueMoon_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Smarts_Llama3 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/llama3-8b-hikikomori-v0.4 - layer_range: [0, 32] model: output/stop_it_nerd+Azazelle/Nimue-8B - layer_range: [0, 32] model: 
output/stop_it_nerd+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B - layer_range: [0, 32] model: output/stop_it_nerd ```
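The Model Stock interpolation used above can be sketched numerically. Below is a hedged, pure-Python illustration of the closed-form weight from the Model Stock paper, treating each model as a flat weight vector; mergekit actually applies this per weight tensor, and the helper names here are illustrative, not mergekit API.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def model_stock_merge(base, finetuned):
    """Merge k fine-tuned weight vectors toward the base model.

    Uses the Model Stock closed-form ratio t = k*cos / (1 + (k-1)*cos),
    where cos is the average pairwise cosine between task vectors
    (finetuned - base). Sketch only: assumes k >= 2 and flat vectors.
    """
    k = len(finetuned)
    # Task vectors: how far each fine-tune moved away from the base.
    deltas = [[f - b for f, b in zip(ft, base)] for ft in finetuned]
    # Average pairwise cosine similarity between task vectors.
    cosines = [
        dot(deltas[i], deltas[j]) / (norm(deltas[i]) * norm(deltas[j]))
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_t = sum(cosines) / len(cosines)
    # Interpolation ratio from the Model Stock paper.
    t = (k * cos_t) / (1 + (k - 1) * cos_t)
    # Average of the fine-tuned models, then interpolate toward the base.
    avg = [sum(col) / k for col in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

Intuition: when the fine-tunes agree (cosine near 1), the merge trusts their average; when they point in unrelated directions (cosine near 0), the merge collapses back to the base weights.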
skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF
skratos115
2024-06-28T19:15:06Z
25
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-28T19:14:24Z
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct license: other license_name: deepseek-license license_link: LICENSE tags: - llama-cpp - gguf-my-repo --- # skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct`](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary.
``` ./llama-cli --hf-repo skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo skratos115/DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M-GGUF --hf-file deepseek-coder-v2-lite-instruct-q4_k_m.gguf -c 2048 ```
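Once `llama-server` is running, it also exposes an OpenAI-compatible HTTP endpoint. Below is a minimal stdlib-only sketch of querying it; the URL assumes llama-server's default local port, and the function names are illustrative.

```python
import json
import urllib.request

# Assumed default address for a locally launched llama-server.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_payload(prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat payload accepted by llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt: str) -> str:
    """POST a chat request and return the first completion's text."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Requires a running server started as shown above.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# e.g. print(ask("Write a Python function that reverses a string."))
```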
ILKT/2024-06-24_00-11-56_epoch_4
ILKT
2024-06-28T19:08:53Z
147
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-24T03:41:06Z
--- language: - en - pl model-index: - name: 2024-06-24_00-11-56_epoch_4 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.86282306163022 - type: f1 value: 20.03845065500856 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.03 - type: ap value: 14.035067729760556 - type: f1 value: 44.0135900331805 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.4557767917811435 - type: v_measure_std value: 0.253061574416667 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 21.65097511768662 - type: f1 value: 20.17015022013295 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 21.431382193802264 - type: f1 value: 19.630773057041544 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
29.287155346334902 - type: f1 value: 26.364245457170743 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 28.00295130349238 - type: f1 value: 25.943766787902728 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 55.60961482768608 - type: ap value: 69.1105053907167 - type: f1 value: 52.28026512092987 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 27.04828167882042 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 22.701215363942534 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 36.13573407202216 - type: f1 value: 34.79457801402416 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 17.65182186234818 - type: f1 value: 16.40280459706257 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
mradermacher/cosmosage-v3-GGUF
mradermacher
2024-06-28T19:07:43Z
103
0
transformers
[ "transformers", "gguf", "physics", "cosmology", "en", "dataset:teknium/OpenHermes-2.5", "base_model:Tijmen2/cosmosage-v3", "base_model:quantized:Tijmen2/cosmosage-v3", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-28T16:42:33Z
--- base_model: Tijmen2/cosmosage-v3 datasets: - teknium/OpenHermes-2.5 language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - physics - cosmology --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Tijmen2/cosmosage-v3 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/cosmosage-v3-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | 
[GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/cosmosage-v3-GGUF/resolve/main/cosmosage-v3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
neo-tax/technical-in-nature-classifier-for-projects
neo-tax
2024-06-28T19:07:14Z
97
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:avsolatorio/GIST-Embedding-v0", "base_model:finetune:avsolatorio/GIST-Embedding-v0", "region:us" ]
text-classification
2024-06-28T19:06:43Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: avsolatorio/GIST-Embedding-v0 metrics: - accuracy widget: - text: The project is focused on developing a new employee benefits package designed to attract and retain top talent. We will conduct competitive benchmarking to understand industry standards, gather employee feedback to identify desired benefits, and create a comprehensive package that includes health, wellness, and financial incentives. - text: A tire manufacturing company created a new belt to be used as part of tread cooling during the manufacturing process. Such a belt is not commercially available. - text: Covers tasks related to data quality and compliance. This includes handling data errors, updating data catalog definitions, and implementing compliance updates. The project aims to ensure the accuracy, completeness, and compliance of the company's data, thereby increasing its reliability and trustworthiness. - text: Involves the development, testing, and maintenance of the Huntress agent software. This includes fixing bugs, improving error handling, and adding new functionalities. The project ensures the agent software is reliable and effective in protecting customer systems. - text: This project involved integrating an off-the-shelf software program into the company's existing software infrastructure with the goal of improving the customer data allocation and retention processes. The design and development of the integrations required to successfully launch the program within the company's existing software architecture required the Python programming language. This development required the performance of significant testing in an iterative manner by the development team because Python had never been used to integrate applications within the company's platform previously.
pipeline_tag: text-classification inference: true --- # SetFit with avsolatorio/GIST-Embedding-v0 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [avsolatorio/GIST-Embedding-v0](https://huggingface.co/avsolatorio/GIST-Embedding-v0) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [avsolatorio/GIST-Embedding-v0](https://huggingface.co/avsolatorio/GIST-Embedding-v0) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | 
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | <ul><li>"A manufacturing corporation undertakes an initiative to restructure its manufacturing organization by designing an organizational structure that will improve the company's business operations"</li><li>"Centers on the production of content for the Brief product. This includes tasks related to drafting insights, creating case studies, and publishing social media posts. The project aims to provide valuable and timely information to Kharon's clients, helping them stay informed about global security topics that impact their commercial activities."</li><li>'The team is developing a comprehensive marketing strategy to increase brand awareness and customer engagement. This includes creating targeted advertising campaigns, optimizing our social media presence, and collaborating with influencers to promote our products. 
We will also analyze market trends and consumer behavior to refine our approach.'</li></ul> | | 1 | <ul><li>"Project focused on enhancing the website's functionality, including tasks related to optimizing search functionality and integrating new features such as bookmarks and table of contents for the web reader. The project aims to provide a seamless online experience for customers by improving the efficiency and speed of our website."</li><li>'Design and create an innovative drug delivery system for cancer treatment compatible with different types of cancer and different patient profiles while minimizing negative impacts on healthy tissues'</li><li>'Develop a new and advanced Natural Language Processing (NLP) algorithm to enhance the capabilities of virtual assistants used in various applications, such as customer service chatbots. This project involved improving the NLP algorithm to be more responsive in the area of complex natural language understanding, including context comprehension, sentiment analysis, and accurate response generation'</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("A tire manufacturing company created a new belt to be used as part of tread cooling during the manufacturing process. Such a belt is not commercially available.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? 
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 23 | 43.5 | 54 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 16 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (0.0001, 0.0001) - head_learning_rate: 0.0001 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0167 | 1 | 0.2764 | - | | 0.8333 | 50 | 0.0014 | - | | 1.6667 | 100 | 0.0011 | - | | 2.5 | 150 | 0.0011 | - | ### Framework Versions - Python: 3.9.16 - SetFit: 1.0.3 - Sentence Transformers: 3.0.1 - Transformers: 4.39.0 - PyTorch: 2.3.1 - Datasets: 2.19.2 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people 
who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
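The contrastive fine-tuning step in SetFit's recipe works by turning the few labeled texts into similarity pairs: texts sharing a label become positive pairs, texts with different labels become negatives. A hedged, pure-Python sketch of that pair-generation step follows; the real implementation lives inside the `setfit` library and differs in its sampling strategy.

```python
from itertools import combinations

def contrastive_pairs(examples):
    """Turn labeled texts into (text_a, text_b, similarity) triples.

    examples: list of (text, label) tuples.
    Returns 1.0 for same-label (positive) pairs, 0.0 for cross-label
    (negative) pairs -- the targets for cosine-similarity fine-tuning.
    Sketch only: enumerates all pairs instead of sampling.
    """
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(examples, 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs
```

This quadratic expansion of pairs is what lets SetFit squeeze a useful training signal out of only a handful of labeled examples per class.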
Niggendar/edgAnzhc_aaaaanzhcpower
Niggendar
2024-06-28T19:06:30Z
140
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-28T19:02:51Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ILKT/2024-06-24_00-11-56_epoch_3
ILKT
2024-06-28T19:03:26Z
148
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-24T02:18:49Z
--- language: - en - pl model-index: - name: 2024-06-24_00-11-56_epoch_3 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.693836978131216 - type: f1 value: 19.7785246426658 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.059999999999995 - type: ap value: 14.572523115281122 - type: f1 value: 44.666538368681046 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 2.6012385482248446 - type: v_measure_std value: 0.5829268652344415 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 20.077336919973103 - type: f1 value: 19.816169753103157 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 19.660600098376783 - type: f1 value: 19.188722526724238 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 29.176193678547406 - type: f1 value: 26.376493006806534 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 28.411214953271024 - type: f1 value: 26.216871325994344 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 56.13669273095858 - type: ap value: 70.01898255793628 - type: f1 value: 53.68729025101361 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 26.850340797882478 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 23.162374198242148 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 37.90858725761773 - type: f1 value: 38.70869305063897 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 25.647773279352226 - type: f1 value: 20.54090952680169 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
ILKT/2024-06-24_00-11-56_epoch_2
ILKT
2024-06-28T18:56:46Z
143
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-24T00:56:41Z
--- language: - en - pl model-index: - name: 2024-06-24_00-11-56_epoch_2 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 20.636182902584494 - type: f1 value: 18.970548449520848 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.260000000000005 - type: ap value: 13.42046897399368 - type: f1 value: 43.45180723241649 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 3.1377802940926958 - type: v_measure_std value: 0.33155306924832717 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 17.70006724949563 - type: f1 value: 16.72072580681421 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 17.220855878012788 - type: f1 value: 16.107122172246818 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 26.37188971082717 - type: f1 value: 23.0257457094473 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 24.530250860796855 - type: f1 value: 22.097320507641246 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 53.35360556038226 - type: ap value: 69.100142615254 - type: f1 value: 51.20380111249444 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 25.985522577017615 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 22.75038559862368 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 31.77285318559557 - type: f1 value: 31.517754047233744 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 24.51417004048583 - type: f1 value: 19.865669742797284 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Makkoen/whisper-large-cit-synth-do0.15-wd0-lr1e-05-1000
Makkoen
2024-06-28T18:46:51Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-06-28T14:57:16Z
--- language: - en license: apache-2.0 base_model: openai/whisper-large-v3 tags: - generated_from_trainer metrics: - wer model-index: - name: ./whisper-large-cit-synth-do0.15-wd0-lr1e-05-1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ./whisper-large-cit-synth-do0.15-wd0-lr1e-05-1000 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the SF 1000 dataset. It achieves the following results on the evaluation set: - Loss: 0.4507 - Wer: 21.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 300 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.6733 | 0.3556 | 20 | 0.4937 | 29.2008 | | 0.4835 | 0.7111 | 40 | 0.3850 | 24.2105 | | 0.4129 | 1.0667 | 60 | 0.3589 | 23.5088 | | 0.268 | 1.4222 | 80 | 0.3472 | 23.4698 | | 0.2525 | 1.7778 | 100 | 0.3474 | 23.0019 | | 0.1903 | 2.1333 | 120 | 0.3608 | 22.7680 | | 0.1316 | 2.4889 | 140 | 0.3730 | 22.9240 | | 0.1368 | 2.8444 | 160 | 0.3545 | 25.3801 | | 0.088 | 3.2 | 180 | 0.3879 | 23.0409 | | 0.0688 | 3.5556 | 200 | 0.4038 | 23.9376 | | 0.0672 | 3.9111 | 220 | 0.3813 | 22.1832 | | 0.0449 | 4.2667 | 240 | 0.4250 | 22.8070 | | 0.0338 | 4.6222 | 260 | 0.4314 | 22.2222 | 
| 0.0376 | 4.9778 | 280 | 0.4250 | 21.4425 | | 0.0183 | 5.3333 | 300 | 0.4507 | 21.8713 | ### Framework versions - Transformers 4.42.3 - Pytorch 1.13.1+cu117 - Datasets 2.20.0 - Tokenizers 0.19.1
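The Wer column in the card above is the word error rate, reported as a percentage (e.g. 21.8713 means ≈ 0.2187). As an illustrative sketch of how that metric is computed — not the trainer's actual evaluation code — WER is the word-level Levenshtein edit distance between reference and hypothesis, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming Levenshtein distance over words.
    d = list(range(len(hyp) + 1))          # row for the empty reference
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds d[i-1][j-1]
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion of a reference word
                      d[j - 1] + 1,        # insertion of a hypothesis word
                      prev + (r != h))     # substitution (free if words match)
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

# One inserted word over a 4-word reference -> 1/4.
print(wer("the quick brown fox", "the quick brown fox jumps"))  # 0.25
```

Multiplying by 100 gives the percentage figures shown in the training-results table.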
ILKT/2024-06-24_22-31-18_epoch_75
ILKT
2024-06-28T18:45:37Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T20:22:20Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_75 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 24.145129224652084 - type: f1 value: 22.31539517311173 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.90999999999999 - type: ap value: 15.1506532658194 - type: f1 value: 45.64169846891563 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 7.835526691746403 - type: v_measure_std value: 1.310069183656216 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.41492938802959 - type: f1 value: 26.91718168750773 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 28.504672897196258 - type: f1 value: 25.757449612360034 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
37.96234028244788 - type: f1 value: 36.05116062969537 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 36.827348745696014 - type: f1 value: 35.7078301081846 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 64.63944396177237 - type: ap value: 73.05639300191305 - type: f1 value: 60.4690982645747 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 35.910590406842516 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 31.836353910360828 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 49.37673130193907 - type: f1 value: 50.27096396342048 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 25.000000000000007 - type: f1 value: 20.950348494099522 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
ILKT/2024-06-24_22-31-18_epoch_74
ILKT
2024-06-28T18:44:07Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T20:03:06Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_74 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 24.67196819085487 - type: f1 value: 22.855737534484867 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.74 - type: ap value: 15.0141225254739 - type: f1 value: 45.236014313387614 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 10.61107724605009 - type: v_measure_std value: 2.2605034803236417 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 30.15131136516477 - type: f1 value: 27.89887698215734 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.41957697983276 - type: f1 value: 27.06886156146294 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
39.84196368527236 - type: f1 value: 37.97975771209762 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 38.89326119035908 - type: f1 value: 37.72354573779029 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 64.47726614538082 - type: ap value: 72.9770994253305 - type: f1 value: 60.30210466691275 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.87977871116745 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 32.53310014460228 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 48.1578947368421 - type: f1 value: 49.212713621346325 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 26.396761133603242 - type: f1 value: 21.867946399471073 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
ILKT/2024-06-24_22-31-18_epoch_73
ILKT
2024-06-28T18:42:52Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T19:43:45Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_73 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 23.856858846918488 - type: f1 value: 21.80615781911542 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 54.059999999999995 - type: ap value: 14.833986910689228 - type: f1 value: 45.19798171686985 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 10.443779071971166 - type: v_measure_std value: 1.189184499084186 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 27.37054472091459 - type: f1 value: 25.346968967159466 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 26.994589276930643 - type: f1 value: 24.627788712160477 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 36.809011432414266 - type: f1 value: 35.05451356484148 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 36.16330545991146 - type: f1 value: 34.814635777103135 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 64.06892557196639 - type: ap value: 72.8523138548515 - type: f1 value: 59.70619041148899 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.764013746251244 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 32.38294030377355 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 44.653739612188375 - type: f1 value: 46.82810070789533 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 25.020242914979757 - type: f1 value: 21.77577046612537 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
Sahab55/Conv_text_summarization_BART
Sahab55
2024-06-28T18:42:00Z
106
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-28T18:40:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
migaraa/lora_phi-1_5
migaraa
2024-06-28T18:41:32Z
2
3
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dolly", "ipex", "max series gpu", "dataset:generator", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-05-31T17:24:57Z
--- license: mit library_name: peft tags: - trl - sft - generated_from_trainer - dolly - ipex - max series gpu base_model: microsoft/phi-1_5 datasets: - generator model-index: - name: lora_phi-1_5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora_phi-1_5 This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset. It achieves the following results on the evaluation set: - Loss: 2.3998 ## Model description This is a fine-tuned version of the [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) model using Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA) on an Intel(R) Data Center GPU Max 1100 and an Intel(R) Xeon(R) Platinum 8480+ CPU. This model can be used for various text generation tasks including chatbots, content creation, and other NLP applications.
## Training Hardware This model was trained using: - GPU: Intel(R) Data Center GPU Max 1100 - CPU: Intel(R) Xeon(R) Platinum 8480+ ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - training_steps: 593 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7756 | 0.8065 | 100 | 2.5791 | | 2.558 | 1.6129 | 200 | 2.4656 | | 2.4521 | 2.4194 | 300 | 2.4294 | | 2.4589 | 3.2258 | 400 | 2.4103 | | 2.4248 | 4.0323 | 500 | 2.3998 | ## Framework versions - PEFT 0.11.1 - Transformers 4.41.2 - Pytorch 2.1.0.post0+cxx11.abi - Datasets 2.19.1 - Tokenizers 0.19.1
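The card above fine-tunes phi-1_5 with PEFT/LoRA. As a framework-agnostic illustration of the idea — a NumPy stand-in with made-up dimensions, not the model's actual PEFT code — LoRA freezes the pretrained weight W and learns only a low-rank update (alpha/r)·B·A, which drastically cuts the number of trainable parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; real layers are far larger, but r << d always holds.
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so the update starts at exactly zero

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size                # what full fine-tuning would train
lora_params = A.size + B.size       # what LoRA trains instead
print(full_params, lora_params)     # 4096 1024
```

With the zero-initialised B, the adapted layer reproduces the frozen layer exactly at step 0, so training starts from the pretrained behaviour and only gradually departs from it.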
ILKT/2024-06-24_22-31-18_epoch_72
ILKT
2024-06-28T18:37:05Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T19:24:21Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_72 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 22.087475149105362 - type: f1 value: 19.5706382319453 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 55.00000000000001 - type: ap value: 15.36881518626701 - type: f1 value: 46.06383006480823 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 8.405906184566783 - type: v_measure_std value: 0.8490983655584128 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 27.895090786819104 - type: f1 value: 25.724339315547738 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 27.437284800787015 - type: f1 value: 24.950468469212687 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 35.4808338937458 - type: f1 value: 33.778222802971214 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 35.164781111657646 - type: f1 value: 33.81576557802537 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.8853171155517 - type: ap value: 71.88997519583735 - type: f1 value: 58.02908285755359 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.42437672413899 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 32.50435792527687 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 47.75623268698062 - type: f1 value: 48.27530003115992 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 24.433198380566797 - type: f1 value: 20.50184978405958 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
skratos115/qwen2-7b-OpenDevin-f16
skratos115
2024-06-28T18:35:39Z
7
0
null
[ "gguf", "text-generation", "qwen2", "instruct", "unsloth", "OpenDevin", "dataset:xingyaoww/opendevin-code-act", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-27T21:16:33Z
--- license: mit tags: - text-generation - qwen2 - instruct - unsloth - OpenDevin datasets: - xingyaoww/opendevin-code-act --- ## Qwen2.7b.OpenDevin brought to you by skratos115 (HF) / Kingatlas115 (GH) in collaboration with the official OpenDevin team ~xingyaoww # Qwen2-7B-Instruct with OpenDevin Tool Calling ## Overview This project involves the fine-tuning of the `Qwen2-7B-Instruct` model using the [opendevin-code-act dataset](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) with the help of Unsloth. The primary goal is to develop a more powerful LLM capable of effectively using the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset of tool paths/trajectories and could use all the help we can get: by using the feedback integration you help us build better trajectories, which we release to the public under the MIT license for OSS model training. Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/ ## Model Details - **Model Name**: Qwen2-7B-Instruct - **Dataset**: [opendevin-code-act](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) - **Training Platform**: Unsloth. Provided as full merged files or quantized f16, q4_k_m, Q5_k_m, and Q8_0 GGUF files. I used qwen2.7b.OD.q4_k_m.gguf for my testing and got it to write me a simple script; more testing to come. ## Running the Model You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`. ### Prerequisites - Docker - Hugging Face `transformers` library (version >= 4.37.0 is recommended) q4_k_m: `ollama run skratos115/qwen2-7b-opendevin-q4_k_m` or f16: `ollama run skratos115/qwen2-7b-opendevin-f16` ### Running with Ollama 1. **Install Docker**: Ensure you have Docker installed on your machine. 2. **Pull the Latest Hugging Face Transformers**: pip install transformers>=4.37.0 3.
**Set Up Your Workspace**: WORKSPACE_BASE=$(pwd)/workspace 4. **Run the Docker Command**: docker run -it \ --pull=always \ -e SANDBOX_USER_ID=$(id -u) \ -e PERSIST_SANDBOX="true" \ -e LLM_API_KEY="ollama" \ -e LLM_BASE_URL="http://[yourIPhere or 0.0.0.0]:11434" \ -e SSH_PASSWORD="make something up here" \ -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \ -v $WORKSPACE_BASE:/opt/workspace_base \ -v /var/run/docker.sock:/var/run/docker.sock \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name opendevin-app-$(date +%Y%m%d%H%M%S) \ ghcr.io/opendevin/opendevin:main Replace `[yourIPhere or 0.0.0.0]` with your actual IP address or use `0.0.0.0` for localhost. ## Early Development This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome. ## Support my work Right now all of my work has been funded personally; if you like my work and can help support growth in the AI community, consider joining or donating to my Patreon. [Patreon Link](https://www.patreon.com/atlasaisecurity) ## License This project is licensed under the [MIT License](LICENSE).
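Once one of the quantized models is being served by ollama, it can also be queried programmatically over ollama's local HTTP API; the sketch below only builds a request body for the `/api/generate` endpoint (the prompt is a made-up example):

```python
import json

# Request body for ollama's /api/generate endpoint, using the q4_k_m model
# tag from this card's `ollama run` command. The prompt is a made-up example;
# POST this body to http://localhost:11434/api/generate on the serving host.
payload = {
    "model": "skratos115/qwen2-7b-opendevin-q4_k_m",
    "prompt": "Write a Python script that prints the current date.",
    "stream": False,  # ask for a single JSON response instead of a token stream
}
print(json.dumps(payload))
```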
skratos115/qwen2-7b-OpenDevin-q5_k_m
skratos115
2024-06-28T18:27:22Z
4
0
null
[ "gguf", "text-generation", "qwen2", "instruct", "unsloth", "OpenDevin", "dataset:xingyaoww/opendevin-code-act", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-27T21:27:38Z
--- license: mit tags: - text-generation - qwen2 - instruct - unsloth - OpenDevin datasets: - xingyaoww/opendevin-code-act --- ## Qwen2.7b.OpenDevin brought to you by skratos115 (HF) / Kingatlas115 (GH) in collaboration with the official OpenDevin team ~xingyaoww # Qwen2-7B-Instruct with OpenDevin Tool Calling ## Overview This project involves the fine-tuning of the `Qwen2-7B-Instruct` model using the [opendevin-code-act dataset](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) with the help of Unsloth. The primary goal is to develop a more powerful LLM capable of effectively using the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset of tool paths/trajectories and could use all the help we can get: by using the feedback integration you help us build better trajectories, which we release to the public under the MIT license for OSS model training. Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/ ## Model Details - **Model Name**: Qwen2-7B-Instruct - **Dataset**: [opendevin-code-act](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) - **Training Platform**: Unsloth. Provided as full merged files or quantized f16, q4_k_m, Q5_k_m, and Q8_0 GGUF files. I used qwen2.7b.OD.q4_k_m.gguf for my testing and got it to write me a simple script; more testing to come. ## Running the Model You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`. ### Prerequisites - Docker - Hugging Face `transformers` library (version >= 4.37.0 is recommended) ### Running with Ollama 1. **Install Docker**: Ensure you have Docker installed on your machine. 2. **Pull the Latest Hugging Face Transformers**: pip install transformers>=4.37.0 3. **Set Up Your Workspace**: WORKSPACE_BASE=$(pwd)/workspace 4.
**Run the Docker Command**: docker run -it \ --pull=always \ -e SANDBOX_USER_ID=$(id -u) \ -e PERSIST_SANDBOX="true" \ -e LLM_API_KEY="ollama" \ -e LLM_BASE_URL="http://[yourIPhere or 0.0.0.0]:11434" \ -e SSH_PASSWORD="make something up here" \ -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \ -v $WORKSPACE_BASE:/opt/workspace_base \ -v /var/run/docker.sock:/var/run/docker.sock \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name opendevin-app-$(date +%Y%m%d%H%M%S) \ ghcr.io/opendevin/opendevin:main Replace `[yourIPhere or 0.0.0.0]` with your actual IP address or use `0.0.0.0` for localhost. ## Early Development This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome. ## Support my work Right now all of my work has been funded personally; if you like my work and can help support growth in the AI community, consider joining or donating to my Patreon. [Patreon Link](https://www.patreon.com/atlasaisecurity) ## License This project is licensed under the [MIT License](LICENSE).
ILKT/2024-06-24_22-31-18_epoch_70
ILKT
2024-06-28T18:26:45Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T18:45:06Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_70 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 24.115308151093434 - type: f1 value: 21.80844479100129 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 54.96 - type: ap value: 15.975489143022825 - type: f1 value: 46.83152716570406 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 10.734363928408763 - type: v_measure_std value: 2.116834644117752 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.620040349697373 - type: f1 value: 27.769851853273774 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 28.8047220855878 - type: f1 value: 26.228250502335253 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
37.46133154001345 - type: f1 value: 35.909950764169 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 37.486473192326606 - type: f1 value: 36.20974947505478 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.89110918042282 - type: ap value: 72.10070914897945 - type: f1 value: 57.89426378304563 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.15342836510209 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 32.24500312313666 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 49.01662049861496 - type: f1 value: 50.49138745910867 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 26.7004048582996 - type: f1 value: 20.54151599200167 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
skratos115/qwen2-7b-OpenDevin-q8_o
skratos115
2024-06-28T18:25:57Z
6
0
null
[ "gguf", "text-generation", "qwen2", "instruct", "unsloth", "OpenDevin", "dataset:xingyaoww/opendevin-code-act", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-06-27T21:48:18Z
--- license: mit tags: - text-generation - qwen2 - instruct - unsloth - OpenDevin datasets: - xingyaoww/opendevin-code-act --- ## Qwen2.7b.OpenDevin brought to you by skratos115 (HF) / Kingatlas115 (GH) in collaboration with the official OpenDevin team ~xingyaoww # Qwen2-7B-Instruct with OpenDevin Tool Calling ## Overview This project involves the fine-tuning of the `Qwen2-7B-Instruct` model using the [opendevin-code-act dataset](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) with the help of Unsloth. The primary goal is to develop a more powerful LLM capable of effectively using the CodeAct framework for tool calling. This is still in early development and should not be used in production. We are working on building a bigger dataset of tool paths/trajectories and could use all the help we can get: by using the feedback integration you help us build better trajectories, which we release to the public under the MIT license for OSS model training. Read more here: https://x.com/gneubig/status/1802740786242420896 and http://www.linkedin.com/feed/update/urn:li:activity:7208507606728929280/ ## Model Details - **Model Name**: Qwen2-7B-Instruct - **Dataset**: [opendevin-code-act](https://huggingface.co/datasets/xingyaoww/opendevin-code-act) - **Training Platform**: Unsloth. Provided as full merged files or quantized f16, q4_k_m, Q5_k_m, and Q8_0 GGUF files. I used qwen2.7b.OD.q4_k_m.gguf for my testing and got it to write me a simple script; more testing to come. ## Running the Model You can run this model using `vLLM` or `ollama`. The following instructions are for using `ollama`. ### Prerequisites - Docker - Hugging Face `transformers` library (version >= 4.37.0 is recommended) ### Running with Ollama 1. **Install Docker**: Ensure you have Docker installed on your machine. 2. **Pull the Latest Hugging Face Transformers**: pip install transformers>=4.37.0 3. **Set Up Your Workspace**: WORKSPACE_BASE=$(pwd)/workspace 4.
**Run the Docker Command**: docker run -it \ --pull=always \ -e SANDBOX_USER_ID=$(id -u) \ -e PERSIST_SANDBOX="true" \ -e LLM_API_KEY="ollama" \ -e LLM_BASE_URL="http://[yourIPhere or 0.0.0.0]:11434" \ -e SSH_PASSWORD="make something up here" \ -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \ -v $WORKSPACE_BASE:/opt/workspace_base \ -v /var/run/docker.sock:/var/run/docker.sock \ -p 3000:3000 \ --add-host host.docker.internal:host-gateway \ --name opendevin-app-$(date +%Y%m%d%H%M%S) \ ghcr.io/opendevin/opendevin:main Replace `[yourIPhere or 0.0.0.0]` with your actual IP address or use `0.0.0.0` for localhost. ## Early Development This project is in its early stages, and we are continuously working to improve the model and its capabilities. Contributions and feedback are welcome. ## Support my work Right now all of my work has been funded personally; if you like my work and can help support growth in the AI community, consider joining or donating to my Patreon. [Patreon Link](https://www.patreon.com/atlasaisecurity) ## License This project is licensed under the [MIT License](LICENSE).
ILKT/2024-06-24_22-31-18_epoch_69
ILKT
2024-06-28T18:25:35Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T18:25:58Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_69 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 23.807157057654074 - type: f1 value: 20.74866830583465 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.43 - type: ap value: 15.370300273373735 - type: f1 value: 45.485816633592535 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 8.34157848929292 - type: v_measure_std value: 1.6835064788653904 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.852051109616678 - type: f1 value: 27.65059576149131 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 29.729463846532223 - type: f1 value: 26.742962510648756 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
38.97108271687962 - type: f1 value: 37.044830848927745 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 38.780127889818004 - type: f1 value: 37.32570314592107 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.1120764552563 - type: ap value: 72.05719264653985 - type: f1 value: 57.694837317259505 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.074828238780164 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 31.834564414565435 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 46.10803324099724 - type: f1 value: 46.81820320119227 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 22.08502024291498 - type: f1 value: 19.404680394030223 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
srinivasan-sridhar28/distilbert-base-uncased-finetuned-imdb
srinivasan-sridhar28
2024-06-28T18:24:08Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-06-28T18:08:56Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3472 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8573 | 1.0 | 157 | 2.4537 | | 2.0705 | 2.0 | 314 | 2.4086 | | 2.2841 | 3.0 | 471 | 2.4206 | | 2.4046 | 4.0 | 628 | 2.3390 | | 2.3871 | 5.0 | 785 | 2.3809 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
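Since this is a masked-language model, the evaluation loss reported on this card has a natural reading as perplexity via exp(loss); a quick check:

```python
import math

# The card reports an evaluation loss of 2.3472; for a language model this
# cross-entropy loss converts to perplexity as exp(loss).
eval_loss = 2.3472
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 10.46
```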
AIEKEK/xlm-roberta-base-finetuned-panx-de
AIEKEK
2024-06-28T18:17:24Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-28T16:58:10Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1336 - F1: 0.8593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2517 | 1.0 | 525 | 0.1447 | 0.8336 | | 0.1277 | 2.0 | 1050 | 0.1397 | 0.8476 | | 0.0818 | 3.0 | 1575 | 0.1336 | 0.8593 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.0 - Datasets 2.16.1 - Tokenizers 0.15.2
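The F1 reported on this card is the entity-level score commonly used for NER evaluation, where a prediction counts as correct only if both the span and the label match the gold entity; a toy sketch (the entities below are made up for illustration):

```python
# Entity-level F1: a predicted entity is a true positive only when its
# (label, start, end) triple exactly matches a gold entity.
# The entities below are made-up examples, not data from this training run.
gold = {("PER", 0, 2), ("LOC", 5, 6)}
pred = {("PER", 0, 2), ("ORG", 5, 6)}  # second entity has the wrong label

tp = len(gold & pred)                                # 1 exact match
precision = tp / len(pred)                           # 0.5
recall = tp / len(gold)                              # 0.5
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.5
```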
ILKT/2024-06-24_22-31-18_epoch_67
ILKT
2024-06-28T18:14:57Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T17:48:37Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_67 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 24.622266401590455 - type: f1 value: 22.936267682156487 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.48 - type: ap value: 15.322095521539064 - type: f1 value: 45.49225512083147 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 9.363928383066206 - type: v_measure_std value: 1.3367977820048715 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 26.54001344989913 - type: f1 value: 23.96832609186341 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 26.015740285292676 - type: f1 value: 23.212345772348385 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
34.862138533960994 - type: f1 value: 32.8318592868999 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 34.63354648303001 - type: f1 value: 33.231436557685505 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 65.54590211410367 - type: ap value: 74.21876513105504 - type: f1 value: 62.16874555498553 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 35.760616638633856 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 32.24926171089566 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 49.279778393351805 - type: f1 value: 49.51142756516184 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 18.157894736842103 - type: f1 value: 15.771804883173445 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
ILKT/2024-06-24_22-31-18_epoch_65
ILKT
2024-06-28T18:04:30Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T17:09:49Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_65 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 24.284294234592448 - type: f1 value: 22.277413998996426 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 53.23 - type: ap value: 14.760186188077112 - type: f1 value: 44.877577191437354 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 7.492921935218096 - type: v_measure_std value: 1.1544515147427612 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 28.449899125756563 - type: f1 value: 26.206394858233033 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 28.224299065420567 - type: f1 value: 25.556535309581264 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 
34.77135171486214 - type: f1 value: 33.58898828363504 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 34.79094933595671 - type: f1 value: 33.86935454255312 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 61.5059368664929 - type: ap value: 72.39105631460139 - type: f1 value: 58.287677162199735 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.2986850948915 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 31.6682523752145 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 49.73684210526316 - type: f1 value: 50.66885015512346 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 24.47368421052632 - type: f1 value: 20.53223695541805 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
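The per-task metrics in the card above are stored as flattened YAML. A small stdlib sketch (the regex and the abbreviated sample string are illustrative, not part of the card) shows one way to pull the accuracy values back out of that form:

```python
import re

# Fragment in the same flattened form as the card frontmatter above
# (values abbreviated for illustration).
card = (
    "- type: accuracy value: 24.284294234592448 "
    "- type: f1 value: 22.277413998996426 "
    "- type: accuracy value: 53.23"
)

# Pull out every metric reported as "accuracy".
accuracies = [
    float(m) for m in re.findall(r"type: accuracy value: ([\d.]+)", card)
]
# accuracies -> [24.284294234592448, 53.23]
```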
RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf
RichardErkhov
2024-06-28T18:04:15Z
79
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-06-28T15:24:54Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OrpoLlama-3-8B-memorize-translate - GGUF - Model creator: https://huggingface.co/ItchyChin/ - Original model: https://huggingface.co/ItchyChin/OrpoLlama-3-8B-memorize-translate/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OrpoLlama-3-8B-memorize-translate.Q2_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q2_K.gguf) | Q2_K | 2.96GB | | [OrpoLlama-3-8B-memorize-translate.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [OrpoLlama-3-8B-memorize-translate.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_S.gguf) | IQ3_S | 3.43GB | | [OrpoLlama-3-8B-memorize-translate.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [OrpoLlama-3-8B-memorize-translate.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ3_M.gguf) | IQ3_M | 3.52GB | | [OrpoLlama-3-8B-memorize-translate.Q3_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K.gguf) | Q3_K | 3.74GB | | [OrpoLlama-3-8B-memorize-translate.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | 
[OrpoLlama-3-8B-memorize-translate.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [OrpoLlama-3-8B-memorize-translate.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [OrpoLlama-3-8B-memorize-translate.Q4_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_0.gguf) | Q4_0 | 4.34GB | | [OrpoLlama-3-8B-memorize-translate.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [OrpoLlama-3-8B-memorize-translate.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [OrpoLlama-3-8B-memorize-translate.Q4_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K.gguf) | Q4_K | 4.58GB | | [OrpoLlama-3-8B-memorize-translate.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [OrpoLlama-3-8B-memorize-translate.Q4_1.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q4_1.gguf) | Q4_1 | 4.78GB | | [OrpoLlama-3-8B-memorize-translate.Q5_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[OrpoLlama-3-8B-memorize-translate.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [OrpoLlama-3-8B-memorize-translate.Q5_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K.gguf) | Q5_K | 5.34GB | | [OrpoLlama-3-8B-memorize-translate.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [OrpoLlama-3-8B-memorize-translate.Q5_1.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q5_1.gguf) | Q5_1 | 5.65GB | | [OrpoLlama-3-8B-memorize-translate.Q6_K.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q6_K.gguf) | Q6_K | 6.14GB | | [OrpoLlama-3-8B-memorize-translate.Q8_0.gguf](https://huggingface.co/RichardErkhov/ItchyChin_-_OrpoLlama-3-8B-memorize-translate-gguf/blob/main/OrpoLlama-3-8B-memorize-translate.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. 
- **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Niggendar/pixelpaintBeautiful_pony
Niggendar
2024-06-28T18:02:54Z
151
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-28T17:55:54Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ILKT/2024-06-24_22-31-18_epoch_63
ILKT
2024-06-28T17:57:51Z
140
0
sentence-transformers
[ "sentence-transformers", "safetensors", "ILKT", "sentence-similarity", "mteb", "feature-extraction", "custom_code", "en", "pl", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-25T16:31:16Z
--- language: - en - pl model-index: - name: 2024-06-24_22-31-18_epoch_63 results: - dataset: config: default name: MTEB AllegroReviews revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 split: test type: PL-MTEB/allegro-reviews metrics: - type: accuracy value: 23.578528827037776 - type: f1 value: 20.90009720694687 task: type: Classification - dataset: config: default name: MTEB CBD revision: 36ddb419bcffe6a5374c3891957912892916f28d split: test type: PL-MTEB/cbd metrics: - type: accuracy value: 52.459999999999994 - type: ap value: 14.87705319359566 - type: f1 value: 44.71010123669066 task: type: Classification - dataset: config: default name: MTEB CDSC-E revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d split: test type: PL-MTEB/cdsce-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB CDSC-R revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd split: test type: PL-MTEB/cdscr-sts metrics: [] task: type: STS - dataset: config: default name: MTEB EightTagsClustering revision: 78b962b130c6690659c65abf67bf1c2f030606b6 split: test type: PL-MTEB/8tags-clustering metrics: - type: v_measure value: 7.928127147660552 - type: v_measure_std value: 1.6645763520468402 task: type: Clustering - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 23.67182246133154 - type: f1 value: 20.548969166047907 task: type: Classification - dataset: config: pl name: MTEB MassiveIntentClassification (pl) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 23.664535169699953 - type: f1 value: 20.718400089026655 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy 
value: 30.689307330195025 - type: f1 value: 28.88911729338991 task: type: Classification - dataset: config: pl name: MTEB MassiveScenarioClassification (pl) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 30.34923757993114 - type: f1 value: 29.18785219509149 task: type: Classification - dataset: config: default name: MTEB PAC revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 split: test type: laugustyniak/abusive-clauses-pl metrics: - type: accuracy value: 59.78858963220388 - type: ap value: 71.45585761021971 - type: f1 value: 56.6172686091966 task: type: Classification - dataset: config: default name: MTEB PSC revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 split: test type: PL-MTEB/psc-pairclassification metrics: [] task: type: PairClassification - dataset: config: default name: MTEB PlscClusteringP2P revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b split: test type: PL-MTEB/plsc-clustering-p2p metrics: - type: v_measure value: 36.03293025208843 task: type: Clustering - dataset: config: default name: MTEB PlscClusteringS2S revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a split: test type: PL-MTEB/plsc-clustering-s2s metrics: - type: v_measure value: 31.642275273328757 task: type: Clustering - dataset: config: default name: MTEB PolEmo2.0-IN revision: d90724373c70959f17d2331ad51fb60c71176b03 split: test type: PL-MTEB/polemo2_in metrics: - type: accuracy value: 46.551246537396125 - type: f1 value: 47.86798958676618 task: type: Classification - dataset: config: default name: MTEB PolEmo2.0-OUT revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 split: test type: PL-MTEB/polemo2_out metrics: - type: accuracy value: 18.927125506072876 - type: f1 value: 17.117804408762236 task: type: Classification - dataset: config: default name: MTEB SICK-E-PL revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 split: test type: PL-MTEB/sicke-pl-pairclassification metrics: [] task: type: 
PairClassification - dataset: config: default name: MTEB SICK-R-PL revision: fd5c2441b7eeff8676768036142af4cfa42c1339 split: test type: PL-MTEB/sickr-pl-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STS22 (pl) revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: dev type: mteb/stsb_multi_mt metrics: [] task: type: STS - dataset: config: pl name: MTEB STSBenchmarkMultilingualSTS (pl) revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c split: test type: mteb/stsb_multi_mt metrics: [] task: type: STS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - mteb - feature-extraction ---
mlx-community/Hercules-5.0-Qwen2-1.5B-8bits
mlx-community
2024-06-28T17:53:16Z
7
0
mlx
[ "mlx", "safetensors", "qwen2", "en", "dataset:Locutusque/hercules-v5.0", "license:apache-2.0", "region:us" ]
null
2024-06-28T17:37:40Z
--- language: - en license: apache-2.0 tags: - mlx datasets: - Locutusque/hercules-v5.0 inference: parameters: do_sample: true temperature: 0.8 top_p: 0.95 top_k: 40 min_p: 0.1 max_new_tokens: 250 repetition_penalty: 1.1 --- # mlx-community/Hercules-5.0-Qwen2-1.5B-8bits The Model [mlx-community/Hercules-5.0-Qwen2-1.5B-8bits](https://huggingface.co/mlx-community/Hercules-5.0-Qwen2-1.5B-8bits) was converted to MLX format from [M4-ai/Hercules-5.0-Qwen2-1.5B](https://huggingface.co/M4-ai/Hercules-5.0-Qwen2-1.5B) using mlx-lm version **0.14.0**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/Hercules-5.0-Qwen2-1.5B-8bits") response = generate(model, tokenizer, prompt="hello", verbose=True) ```